perm filename HIST.CL[COM,LSP] blob
sn#769801 filedate 1984-09-16 generic text, type C, neo UTF8
COMMENT ⊗ VALID 00021 PAGES
C REC PAGE DESCRIPTION
C00001 00001
C00004 00002 ∂14-Sep-84 1636 JonL.pa@Xerox.ARPA Re: CL History
C00018 00003 ∂18-Dec-81 0918 HEDRICK at RUTGERS (Mngr DEC-20's/Dir LCSR Comp Facility) information about Common Lisp implementation
C00022 00004 ∂21-Dec-81 0702 HEDRICK at RUTGERS (Mngr DEC-20's/Dir LCSR Comp Facility) Re: Extended-addressing Common Lisp
C00024 00005 ∂21-Dec-81 1512 HEDRICK at RUTGERS (Mngr DEC-20's/Dir LCSR Comp Facility) Common Lisp
C00027 00006 ∂21-Dec-81 0717 HEDRICK at RUTGERS (Mngr DEC-20's/Dir LCSR Comp Facility) Re: Common Lisp
C00029 00007 ∂02-Jan-82 0908 Griss at UTAH-20 (Martin.Griss) Com L
C00032 00008 ∂15-Jan-82 0109 RPG Rutgers lisp development project
C00045 00009 ∂19-Jan-82 1448 Feigenbaum at SUMEX-AIM more on common lisp
C00053 00010 ∂20-Jan-82 2132 Fahlman at CMU-20C Implementations
C00061 00011 ∂12-Sep-82 1623 RPG Vectors versus Arrays
C00067 00012 ∂12-Sep-82 1828 MOON at SCRC-TENEX Vectors versus Arrays
C00070 00013 ∂12-Sep-82 2131 Scott E. Fahlman <Fahlman at Cmu-20c> RPG on Vectors versus Arrays
C00076 00014 ∂13-Sep-82 1133 RPG Reply to Moon on `Vectors versus Arrays'
C00079 00015 ∂14-Sep-82 1823 JonL at PARC-MAXC Re: `Vectors versus Arrays', and the original compromise
C00085 00016 ∂04-Oct-82 2145 STEELE at CMU-20C /BALLOT/
C00181 00017 ∂13-Oct-82 1309 STEELE at CMU-20C Ballot results
C00238 00018 ∂14-Aug-83 1216 FAHLMAN@CMU-CS-C.ARPA Things to do
C00247 00019 ∂18-Aug-83 1006 @MIT-MC:benson@SCRC-TENEX What to do next
C00256 00020 ∂23-Mar-84 2248 GS70@CMU-CS-A Common Lisp Reference Manual
C00262 00021 ∂20-Jun-84 2152 GS70@CMU-CS-A.ARPA "ANSI Lisp" rumor
C00265 ENDMK
C⊗;
∂14-Sep-84 1636 JonL.pa@Xerox.ARPA Re: CL History
Received: from XEROX.ARPA by SU-AI.ARPA with TCP; 14 Sep 84 16:35:17 PDT
Received: from Semillon.ms by ArpaGateway.ms ; 14 SEP 84 16:31:56 PDT
Date: 14 Sep 84 16:31 PDT
From: JonL.pa@XEROX.ARPA
Subject: Re: CL History
In-reply-to: Dick Gabriel <RPG@SU-AI.ARPA>'s message of 14 Sep 84 08:48
PDT
To: RPG@SU-AI.ARPA
cc: steele@TL-20A.ARPA, jonl.PA@XEROX.ARPA
Below is a copy of a historical note which I sent to the group in August
1982, because the material was still fresh on my mind (and because it
was clear that Xerox was not then going to have a significant
involvement with Common Lisp, so that "history" would probably be my
last major contribution for some time to come).
Incidentally, the technical staff -- Larry, Bill, and myself -- have been
asked by Xerox management *not* to attend the meeting next week. Beau
Sheil and Gary Moskovitz (the new head of the A.I. Systems Business
Unit) will represent Xerox. Sigh.
------------------------------------------------------------------
Mail-from: Arpanet host MIT-MC rcvd at 24-AUG-82 1950-PDT
Date: 24 August 1982 22:44-EDT
From: Jon L White <JONL at MIT-MC>
Subject: Roots of "Yu-Shiang Lisp"
To: JONL at MIT-MC, RPG at SU-AI, Guy.Steele at CMU-10A,
Fahlman at CMU-10A
cc: MOON at MIT-MC, Shostak at SRI-CSL, Griss at UTAH-20, DLW at MIT-AI,
RG at MIT-AI, GSB at MIT-ML, Brooks at CMU-20C,
Scherliss at CMU-20C, Engelmore at USC-ISI, Balzer at USC-ISIB,
Hedrick at RUTGERS
In a brief attempt to remember the roots of "Yu-Shiang Lisp", subsequently
named COMMON LISP, I searched my old mail files which are still on-line,
and found a few tidbits of history. Mostly, my mail stuff got deleted,
but the "Call" for the conference at SRI on Apr 8, 1981, by Bob Engelmore
survived, along with an interchange, about a week after the "birth",
between Ed Feigenbaum and Scott Fahlman. These I've packaged up in the
file at MIT-MC JONL;COMMON HIST along with Chuck Hedrick's overall summary
of the April 8 meeting.
I'd like to ask you all to jog your memory cells, and see if any of the
uncertainties below can be filled in, and if additional significant
steps towards the CommonLisp can be identified. Needless to say, this
listing is a view from where I was standing during those days.
←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←
Mar 12, 1981: Bob Engelmore invites many Lisp implementors and users
from the ARPA community to a conference at SRI to clear up issues
surrounding the future of Lisp. Since ARPA was diminishing its
support of Lisp development and maintenance, his "call" may have
had the seeds of CommonLisp in its second paragraph:
" . . . There are now several respectable Lisp dialects in
use, and others under development. The efficiency,
transportability and programming environment varies significantly
from one to the other. Although this pluralism will probably
continue indefinitely, perhaps we can identify a single "community
standard" that can be maintained, documented and distributed in a
professional way, as was done with Interlisp for many years. "
Apr 8, 1981: Moby meeting at SRI. InterLisp crowd appears to be unified;
Scott Fahlman characterises the post-MacLisp crowd as 4 horses going in 5
directions. Nils Nilsson, over a glass of beer, asks Jonl to join SRI in
beginning a Lisp development and maintenance center; Jonl insists on RPG
being a principal of the effort. The advantages of a Lisp with Arpa
support, which "... can be maintained, documented and distributed in a
professional way ...", appeared enormous.
Apr 9, 1981: RPG, Jonl, and GLS ride in a cramped car to Livermore,
during which time the prospect of merging the Vax/NIL, S-1/NIL, and
Spice/Lisp projects is proposed to GLS. Some technical obstacles are
worked out. Later that night, at Brian Reed's house, SEF is apprised of
the prospects. He too quickly realizes the advantages of a common dialect
when presenting plans to funding agencies; more technical details are
worked out, in particular the administrative plan of the CMU folks that
the manual will be written first before coding commences, and the manual
will be under the control of GLS.
Apr 10, 1981: Jonl and RPG meet with Nils Nilsson, Gary Hendrix, Karl
Leavitt, Jack Goldberg, and Rob Shostack; brief outline is made of what
SRI would contribute, what Lawrence-Livermore would contribute, and what
CMU would contribute. Nils takes plans to Arpa to "get a reading".
Apr 13, 1981: More meetings between RPG, Jonl, and Goldberg, Leavitt,
Shostack. SRI has a proposal for a "portable InterLisp" in the works, and
the NIL/Spice plan is to be merged with that project, under the CSL
section. Details are worked out about how CMU will retain "ownership" of
the manual, but SRI will be a distribution center.
Later that week: Nils reports mixed reception in Washington from Arpa.
SEF and GLS are already back at CMU. Plans are made to meet at CMU
sometime "soon" since the S-1/NIL group will be re-locating to CMU for the
summer.
Next week: Feigenbaum gives tacit approval to the plan, in an Arpa-Net
letter to SEF. Such support is received with joy.
May 1981: Jonl and Shostak prepare a written plan for SRI involvement,
with a view to obtaining ARPA funding.
First week of June (Saturday): Meeting at CMU to resolve particular
language issues. Attending were GLS, SEF, RPG, JONL, Bill Scherliss, and Rod
Brooks. A lot of time was spent on treatment of Multiple-values;
NIL versus () remains unresolved. Lunch is had at Ali Baba's, and
the name Yu-Shiang Lisp is proposed to replace Spice Lisp; also
proposed is to retain the generic name NIL, but to specialize between
Spice/NIL, S-1/NIL, Vax/NIL etc. Importance is recognized of bringing in the
other post-MacLisp groups, notably Symbolics and LMI.
July: Report from ARPA looks negative for any funding for proposal from
SRI.
Summer: Symbolics greets the idea of "standardizing" with much
support. Noftsker in particular deems it desirable to have a
common dialect on the Vax through which potential LispMachine
customers can be exposed to Lisp. Moon pushes for a name, which
by default seems to be heading for CommonLisp. GLS produces the
"Swiss Cheese" edition of the Spice Lisp manual.
Sept: Change in administration in ARPA casts new light on SRI hopes:
A big "smile" is offered to the plan; it is met with approval, but
not with money. Later on, it appears that hopes for an ARPA
proposal are futile; word is around even that ARPA is pulling out
of the InterLisp/VAX support.
Last week of November 1981: Meeting in Cambridge, at Symbolics, to resolve
many issues; excellent "footwork" done by GLS to get a written notebook to
each attendee of the various issues, along with a Ballot sheet. First day
goes moderately; second day degenerates into much flaming. Many hard
issues postponed. Several other groups were now "aboard", in particular
the InterLisp community sent Bill vanMelle as an observer.
[At some point in time, RPG contacted the Utah people to get them
interested. Also, RPG dealt with Masinter as a representative of the
InterLisp Community? Bill Woods at BBN also expresses interest in
the development, so that InterLisp can keep "up to date".]
Fall 1981: Michael Smith, major sales representative of DEC, asks for
advice on getting DEC into the lisp market. Both outside customers
and internal projects make it imperative that DEC do something soon.
Internally, Chinnaswamy in Engineering at the Marlborough plant, and
John Ulrich in the new "Knowledge Engineering" project at Tewksbury
apply internal pressure for DEC to take action quickly.
Mid December, 1981: Sam Fuller calls several people in to DEC for
consultation about what DEC can do to support Lisp. Jonl makes a case for
DEC joining the CommonLisp bandwagon, rather than any of the other options,
namely: jump in wholeheartedly behind InterLisp/VAX, or behind Vax/NIL,
or (most likely) strike out afresh with their own DEC Lisp. Chuck
Hedrick is given a contract by DEC's LCG (the TOPS-20 people) to do an
extended-addressing 20-Lisp, of whatever flavor is decided upon by the
VAX group.
Jan 1982: DEC gives CMU a short contract to develop a CommonLisp on the
VAX.
Spring 1982: Discussion continues via ARPA-net mails, culminating in a
very productive day-long session at CMU on Aug 21, 1982.
∂18-Dec-81 0918 HEDRICK at RUTGERS (Mngr DEC-20's/Dir LCSR Comp Facility) information about Common Lisp implementation
Date: 18 Dec 1981 1214-EST
From: HEDRICK at RUTGERS (Mngr DEC-20's/Dir LCSR Comp Facility)
Subject: information about Common Lisp implementation
To: rpg at SU-AI, jonl at MIT-AI
We are about to sign a contract with DEC's LCG whereby they sponsor us
to produce an extended addressing Lisp. We are still discussing whether
this should be Interlisp or Common Lisp. I can see good arguments in
both directions, and do not have a strong preference, but I would
slightly prefer Common Lisp. Do you know whether there are any
implementations of Common Lisp, or something reasonably close to it? I
am reconciled to producing my own "kernel", probably in assembly
language, though I have some other candidates in mind too. But I would
prefer not to have to do all of the Lisp code from scratch.
As you may know, DEC is probably going to support a Lisp for the VAX. My
guess is that we will be very likely to do the same dialect that is
decided upon there. The one exception would be if it looks like MIT (or
someone else) is going to do an extended implementation of Common Lisp.
If so, then we would probably do Interlisp, for completeness.
We have some experience in Lisp implementation now, since Elisp (the
extended implementation of Rutgers/UCI Lisp) is essentially finished.
(I.e. there are some extensions I want to put in, and some optimizations,
but it does allow any sane R/UCI Lisp code to run.) The interpreter now
runs faster than the original R/UCI lisp interpreter. Compiled code is
slightly slower, but we think this is due to the fact that we are not
yet compiling some things in line that should be. (Even CAR is not
always done in line!) The compiler is Utah's portable compiler,
modified for the R/UCI Lisp dialect. It does about what you would want
a Lisp compiler to do, except that it does not open code arithmetic
(though a later compiler has some abilities in that direction). I
suspect that for a Common Lisp implementation we would try to use the
PDP-10 Maclisp compiler as a base, unless it is too crufty to understand
or modify. Changing compilers to produce extended code turns out not to
be a very difficult job.
-------
∂21-Dec-81 0702 HEDRICK at RUTGERS (Mngr DEC-20's/Dir LCSR Comp Facility) Re: Extended-addressing Common Lisp
Date: 21 Dec 1981 0957-EST
From: HEDRICK at RUTGERS (Mngr DEC-20's/Dir LCSR Comp Facility)
Subject: Re: Extended-addressing Common Lisp
To: JONL at MIT-XX
cc: rpg at SU-AI
In-Reply-To: Your message of 18-Dec-81 1835-EST
thanks. At the moment the problem is that DEC is not sure whether they
are interested in Common Lisp or Interlisp. We will probably
follow the decision they make for the VAX, which should be done
sometime within a month. What surprised me about that was from what I
can hear one of Interlisp's main advantages was supposed to be that the
project was further along on the VAX than the NIL project. That sounds
odd to me. I thought NIL had been released. You might want to talk
with some of the folks at DEC. The only one I know is Kalman Reti,
XCON.RETI@DEC-MARLBORO.
-------
∂21-Dec-81 1512 HEDRICK at RUTGERS (Mngr DEC-20's/Dir LCSR Comp Facility) Common Lisp
Date: 21 Dec 1981 1806-EST
From: HEDRICK at RUTGERS (Mngr DEC-20's/Dir LCSR Comp Facility)
Subject: Common Lisp
To: rpg at SU-AI, griss at UTAH-20
I just had a conversation with JonL which I found to be somewhat
unsettling. I had hoped that Common Lisp was a sign that the Maclisp
community was willing to start doing a common development effort. It
begins to look like this is not the case. It sounds to me like the most
we can hope for is a bunch of Lisps that will behave quite differently,
have completely different user facilities, but will have a common subset
of language facilities which will allow knowledgeable users to write
transportable code, if they are careful. I.e. it looks a lot like the
old Standard Lisp effort, wherein you tried to tweak existing
implementations to support the Standard Lisp primitives. I thought more
or less everyone agreed that hadn't worked so well, which is why the new
efforts at Utah aim to do something really transportable. I thought
everybody agreed that these days the way you did a Lisp was to write
some small kernel in an implementation language, and then have a lot of
Lisp code, and that the Lisp code would be shared.
Supposing that we and DEC do agree to proceed with Common Lisp, would
you be interested in starting a Common Lisp sub-conspiracy, i.e. a group
of people interested in a shared Common Lisp implementation? While we
are going to have support from DEC, that support is going to be $70K
(including University overhead) which is going to be a drop in the
bucket if we have to do a whole system, rather than just a VM and some
tweaking.
-------
∂21-Dec-81 0717 HEDRICK at RUTGERS (Mngr DEC-20's/Dir LCSR Comp Facility) Re: Common Lisp
Date: 21 Dec 1981 1012-EST
From: HEDRICK at RUTGERS (Mngr DEC-20's/Dir LCSR Comp Facility)
Subject: Re: Common Lisp
To: RPG at SU-AI
In-Reply-To: Your message of 20-Dec-81 2304-EST
thanks. Are you sure Utah is producing Common Lisp? they have a thing
they call Standard Lisp, which is something completely different. I have
never heard of a Common Lisp project there, and I work very closely with
their Lisp development people so I think I would have.
-------
I visited there the middle of last month for about 3 days and talked over
the technical side of Common Lisp being implemented in their style. Martin told
me that if we only insisted on a small virtual machine with most of the
rest in Lisp code from the Common Lisp people he'd like to do it.
I've been looking at their stuff pretty closely for the much behind schedule
Lisp evaluation thing and I'm pretty impressed with them. We discussed
grafting my S-1 Lisp compiler front end on top of their portable compiler.
-rpg-
∂02-Jan-82 0908 Griss at UTAH-20 (Martin.Griss) Com L
Date: 2 Jan 1982 1005-MST
From: Griss at UTAH-20 (Martin.Griss)
Subject: Com L
To: guy.steele at CMU-10A, rpg at SU-AI
cc: griss at UTAH-20
I have retrieved the revisions and decisions, will look them over.
I will try to set up arrangements to be at POPL Monday-Wednesday,
depending on flights.
What is the Common LISP schedule, next meeting, etc.? Will we be invited to
attend, or is this one of the topics for us to discuss, etc., at POPL?
What in fact are we to discuss, and what should I be thinking about?
As I explained, I hope to finish this round of PSL implementation
on DEC-20, VAX and maybe even first version on 68000 by then.
We will then fill in some missing features, and start bringing up REDUCE,
meta-compiler, BIGfloats, and PictureRLISP graphics. At that point I
have accomplished a significant amount of my NSF goals this year.
Next step is to significantly improve PSL, SYSLISP, merge with the Mode Analysis
phase for improved LISP<->SYSLISP communications and efficiency.
At the same time, we will be looking over various LISP systems to see what sort of good
features can be adapted, and what sort of compatibility packages (e.g., UCI-LISP
package, FranzLISP package, etc.) can be provided.
It's certainly in this phase that I could easily attempt to modify PSL to
provide a CommonLISP kernel, assuming that we have not already adapted much of the
code.
M
-------
∂15-Jan-82 0109 RPG Rutgers lisp development project
∂14-Jan-82 1625 HEDRICK at RUTGERS (Mngr DEC-20's/Dir LCSR Comp Facility) Rutgers lisp development project
Mail-from: ARPANET site RUTGERS rcvd at 13-Jan-82 2146-PST
Date: 14 Jan 1982 0044-EST
From: HEDRICK at RUTGERS (Mngr DEC-20's/Dir LCSR Comp Facility)
Subject: Rutgers lisp development project
To: bboard at RUTGERS, griss at UTAH-20, admin.mrc at SU-SCORE, jsol at RUTGERS
Remailed-date: 14 Jan 1982 1622-PST
Remailed-from: Mark Crispin
Remailed-to: Feigenbaum at SUMEX-AIM, REG at SU-AI
It now appears that we are going to do an implementation of Common Lisp
for the DEC-20. This project is being funded by DEC.
Why are we doing this project at all?
This project is being done because a number of our researchers are going
to want to be able to move their programs to other systems than the
DEC-20. We are proposing to get personal machines over the next few
years. SRI has already run into problems in trying to give AIMDS to
someone who only has a VAX. Thus we think our users are going to want
to move to a dialect that is widely portable.
Also, newer dialects have some useful new features. Although these
features can be put into Elisp, doing so will introduce
incompatibilities with old programs. R/UCI Lisp already has too many
inconsistencies introduced by its long history. It is probably better
to start with a dialect that has been designed in a coherent fashion.
Why Common Lisp?
There are only three dialects of Lisp that are in wide use within the
U.S. on a variety of systems: Interlisp, meta-Maclisp, and Standard
Lisp. (By meta-Maclisp I mean a family of dialects that are all
related to Maclisp and generally share ideas.) Of these, Standard Lisp
has a reputation of not being as "rich" a language, and in fact is not
taken seriously by most sites. This is not entirely fair, but there is
probably nothing we can do about that fact at this stage. So we are left
with Interlisp and meta-Maclisp. A number of implementors from the
Maclisp family have gotten together to define a common dialect that
combines the best features of their various dialects, while still being
reasonable in size. A manual is being produced for it, and once
finished will remain reasonably stable. (Can you believe it?
Documentation before coding!) This dialect is now called Common Lisp.
The advantages of Common Lisp over Interlisp are:
- outside of BBN and Xerox, the Lisp development efforts now going on
all seem to be in the Maclisp family, and now are being
redirected towards Common Lisp. These efforts include
CMU, the Lisp Machine companies (Symbolics, LMI), LRL and MIT.
- Interlisp has some features, particularly the spaghetti stack,
that make it impossible to implement as efficiently and cleanly
as Common Lisp. (Note that it is possible to get as good
efficiency out of compiled code if you do not use these features,
and if you use special techniques when compiling. However that
doesn't help the interpreter, and is not as clean.)
- Because of these complexities in Interlisp, implementation is a
large and complex job. ARPA funded a fairly large effort at
ISI, and even that seems to be marginal. This comment is based
on the report on the ISI project produced by Larry Masinter,
<lisp>interlisp-vax-rpt.txt. Our only hope would be to take
the ISI implementation and attempt to transport it to the 20.
I am concerned that the result of this would be extremely slow.
I am also concerned that we might turn out not to have the
resources necessary to do a good job of it.
- There seems to be a general feeling that Common Lisp will have a
number of attractive features as a language. (Notice that I am
not talking about user facilities, which will no doubt take some
time before they reach the level of Interlisp.) Even people
within Arpa are starting to talk about it as the language of the
future. I am not personally convinced that it is seriously
superior to Interlisp, but it is as good (again, at the language
level), and the general Maclisp community seems to have a number
of ideas that are significantly in advance of what is likely to
show up in Interlisp with the current support available for it.
There are two serious disadvantages of Common Lisp:
- It does not exist yet. As of this week, there now seem to be
sufficient resources committed to it that we can be sure it will
be implemented. The following projects are now committed, at a
level sufficient for success: VAX (CMU), DEC-20 (Rutgers), PERQ
and other related machines (CMU), Lisp Machine (Symbolics), S-1
(LRL). I believe this is sufficient to give the language a
"critical mass".
- It does not have user facilities defined for it. CMU is heavily
committed to the Spice (PERQ) implementation, and will produce
the appropriate tools. They appear to be funded sufficiently
that this will happen.
Why is DEC funding it, and what will be
our relationship with them?
LCG (the group within DEC that is responsible for the DEC-20) is
interested in increasing the software that will support the full 30-bit
address space possible in the DEC-20 architecture. (Our current
processor will only use 23 bits of this, but this is still much better
than what was supported by the old software, which is 18 bits.) They
are proceeding at a reasonable rate with the software that is supported
by DEC. However they recognize that many important languages were
developed outside of DEC, and that it will not be practical for them
to develop large-address-space implementations of all of them in-house.
Thus DEC is attempting to find places that are working on the more
important of these languages, and they are funding efforts to develop
large address versions. They are sponsoring us for Lisp, and Utah
for C. Pascal is being done in a slightly complex fashion. (In fact
some of our support from DEC is for Pascal.)
DEC does not expect to make money directly from these projects. We will
maintain control over the software we develop, and could sell support
for it if we wanted to. We are, of course, expected to make the software
widely available. (Most likely we will submit it to DECUS but also
distribute it ourselves.) What DEC gets out of it is that the large
address space DEC-20 will have a larger variety of software available
for it than otherwise. I believe this will be an important point for
them in the long run, since no one is going to want to buy a machine for
which only the Fortran compiler can generate programs larger than 256K.
Thus they are facing the following facts:
- they can't do things in house nearly as cheaply as universities
can do them.
- universities are no longer being as well funded to do language
development, particularly not for the DEC-20.
How will we go about it?
We have sufficient funding for one full-time person and one RA. Both
DEC and Rutgers are very slow about paperwork. But these people should
be in place sometime early this semester. The implementation will
involve a small kernel, in assembly language, with the rest done in
Lisp. We will get the Lisp code from CMU, and so will only have to do
the kernel. This project seems to be the same size as the Elisp
project, which was done within a year using my spare time and a month or
so of Josh's time. It seems clear that we have sufficient manpower. (If
you think maybe we have too much, I can only say that if we finish the
kernel sooner than planned, we will spend the time working on user
facilities, documentation, and helping users here convert to it.) CMU
plans to finish the VAX project in a year, with a preliminary version in
6 months and a polished release in a year. Our target is similar.
-------
∂19-Jan-82 1448 Feigenbaum at SUMEX-AIM more on common lisp
Scott:
Here are some messages I received recently. I'm worried about
Hedrick and the Vax. I'm not too worried about Lisp Machine, you guys,
and us guys (S-1). I am also worried about Griss and Standard Lisp,
which wants to get on the bandwagon. I guess I'd like to settle kernel
stuff first, fluff later.
I understand your worry about sequences etc. Maybe we could try
to split the effort of studying issues a little. I dunno. It was just
a spur of the moment thought.
-rpg-
∂19-Jan-82 1448 Feigenbaum at SUMEX-AIM more on common lisp
Date: 19 Jan 1982 1443-PST
From: Feigenbaum at SUMEX-AIM
Subject: more on common lisp
To: gabriel at SU-AI
Mail-from: ARPANET host PARC-MAXC rcvd at 19-Jan-82 1331-PST
Date: 19 Jan 1982 13:12 PST
From: Masinter at PARC-MAXC
to: Feigenbaum@sumex-aim
Subject: Common Lisp- reply to Hedrick
It is a shame that such misinformation gets such rapid dissemination....
Date: 19 Jan 1982 12:57 PST
From: Masinter at PARC-MAXC
Subject: Re: CommonLisp at Rutgers
To: Hedrick@Rutgers
cc: Masinter
A copy of your message to "bboard at RUTGERS, griss at UTAH-20, admin.mrc at
SU-SCORE, jsol at RUTGERS" was forwarded to me. I would like to rebut some of
the points in it:
I think that Common Lisp has the potential for being a good lisp dialect which
will carry research forward in the future. I do not think, however, that people
should underestimate the amount of time before Common Lisp could possibly be a
reality.
The Common Lisp manual is nowhere near being complete. Given the current
rate of progress, the Common Lisp language definition would probably not be
resolved for two years--most of the hard issues have merely been deferred (e.g.,
T and NIL, multiple-values), and there are many parts of the manual which are
simply missing. Given the number of people who are joining into the discussion,
some drastic measures will have to be taken to resolve some of the more serious
problems within a reasonable timeframe (say a year).
Beyond that, the number of things which would have to be done to bring up a
new implementation of CommonLisp lead me to believe that the kernel for
another machine, such as the Dec-20, would take on the order of 5 man-years at
least. For many of the features in the manual, it is essential that they be built
into the kernel (most notably the arithmetic features and the multiple-value
mechanism) rather than in shared Lisp code. I believe that many of these may
make an implementation of Common Lisp more "difficult to implement efficiently
and cleanly" than Interlisp.
I think that the Interlisp-VAX effort has been progressing quite well. They have
focused on the important problems before them, and are proceeding quite well. I
do not know for sure, but it is likely that they will deliver a useful system
complete with a programming environment long before the VAX/NIL project,
which has consumed much more resources. When you were interacting with the
group of Interlisp implementors at Xerox, BBN and ISI about implementing
Interlisp, we cautioned you about being optimistic about the amount of
manpower required. What seems to have happened is that you have come away
believing that Common Lisp would be easier to implement. I don't think that is
the case by far.
Given your current manpower estimate (one full-time person and one RA) I do
not believe you have the critical mass to bring off a useful implementation of
Common Lisp. I would hate to see a replay of the previous situation with
Interlisp-VAX, where budgets were made and machines bought on the basis of a
hopeless software project. It is not that you are not competent to do a reasonable
job of implementation, it is just that creating a new implementation of an already
specified language is much much harder than merely creating a new
implementation of a language originally designed for another processor.
I do think that an Interlisp-20 using extended virtual addressing might be
possible, given the amount of work that has gone into making Interlisp
transportable, the current number of compatible implementations (10, D, Jericho,
VAX) and the fact that Interlisp "grew up" in the Tenex/Tops-20 world, and that
some of the ordinarily more difficult problems, such as file names and operating
system conventions, are already tuned for that operating system. I think that a
year of your spare time and Josh for one month seems very thin.
Larry
-------
∂20-Jan-82 2132 Fahlman at CMU-20C Implementations
Date: 21 Jan 1982 0024-EST
From: Fahlman at CMU-20C
Subject: Implementations
To: rpg at SU-AI
cc: steele at CMU-20C, fahlman at CMU-20C
Dick,
I agree that, where a choice must be made, we should give first priority
to settling kernel-ish issues. However, I think that the debate on
sequence functions is not detracting from more kernelish things, so I
see no reason not to go on with that.
Thanks for forwarding Masinter's note to me. I found him to be awfully
pessimistic. I believe that the white pages will be essentially complete
and in a form that just about all of us can agree on within two months.
Of course, the Vax NIL crowd (or anyone else, for that matter) could delay
ratification indefinitely, even if the rest of us have come together, but I
think we had best deal with that when the need arises. We may have to
do something to force convergence if it does not occur naturally. My
estimate may be a bit optimistic, but I don't see how anyone can look at
what has happened since last April and decide that the white pages will
not be done for two years.
Maybe Masinter's two years includes the time to develop all of the
yellow pages stuff -- editors, cross referencers, and so on. If so, I
tend to agree with his estimate. To an Interlisper, Common Lisp will
not offer all of the comforts of home until all this is done and stable,
and a couple of years is a fair estimate for all of this stuff, given
that we haven't really started thinking about this. I certainly don't
expect the Interlisp folks to start flocking over until all this is
ready, but I think we will have the Perq and Vax implementations
together within 6 months or so and fairly stable within a year.
I had assumed that Guy had been keeping you informed of the negotiations
we have had with DEC on Common Lisp for VAX, but maybe he has not. The
situation is this: DEC has been extremely eager to get a Common Lisp up
on Vax VMS, due to pressure from Schlumberger and some other customers,
plus their own internal plans for building some expert systems. Vax NIL
is not officially abandoned, but looks more and more dubious to them,
and to the rest of us. A couple of months ago, I proposed to DEC that
we could build them a fairly decent compiler just by adding a
post-processor to the Spice Lisp byte-code compiler. This
post-processor would turn the simple byte codes into in-line Vax
instructions and the more complex ones into jumps off to hand-coded
functions. Given this compiler, one could then get a Lisp system up
simply by using the Common Lisp in Common Lisp code that we have
developed for Spice. The extra effort to do the Vax implementation
amounts to only a few man-months and, once it is done, the system will
be totally compatible with the Spice implementation and will track any
improvements. With some additional optimizations and a bit of tuning,
the performance of this system should be comparable to any other Lisp on
the Vax, and probably better than Franz.
DEC responded to this proposal with more enthusiasm than I expected. It
is now nearly certain that they will be placing two DEC employees
(namely, ex-CMU grad students Dave McDonald and Walter van Roggen) here
in Pittsburgh to work on this, with consulting by Guy and me. The goal
is to get a Common Lisp running on the Vax in six months, and to spend
the following 6 months tuning and polishing. I feel confident that this
goal will be met. The system will be done first for VMS, but I think we
have convinced DEC that they should invest the epsilon extra effort
needed to get a Unix version up as well.
So even if MIT totally drops the ball on VAX NIL, I think that it is a
pretty safe bet that a Common Lisp for Vax will be up within a year. If
MIT wins, so much the better: the world will have a choice between a
hairy NIL and a basic Common Lisp implementation.
We are suggesting to Chuck Hedrick that he do essentially the same thing
to bring up a Common Lisp for the extended-address 20. If he does, then
this implementation should be done in finite time as well, and should
end up being fully compatible with the other systems. If he decides
instead to do a traditional brute-force implementation with lots of
assembly code, then I tend to agree with Masinter's view: it will take
forever.
I think we may have come up with an interesting kind of portability
here. Anyway, I thought you would be interested in hearing all the
latest news on this.
-- Scott
-------
∂12-Sep-82 1623 RPG Vectors versus Arrays
To: common-lisp at SU-AI
Watching the progress of the Common Lisp committee on the issue
of vectors over the past year I have come to the conclusion that
things are on the verge of being out of control. There isn't an
outstanding issue with regard to vectors versus arrays that
disturbs me as much as the trend of things - almost to the extent
that I would consider removing S-1 Lisp from Common Lisp.
When we first started out there were vectors and arrays; strings and bit
vectors were vectors, and we had the situation where a useful data
structure - derivable from others, though it is - had a distinct name and
a set of facts about it that a novice user could understand without too
much trouble. At last November's meeting the Symbolics crowd convinced us
that changing things were too hard for them, so strings became
1-dimensional arrays. Now, after the most recent meeting, vectors have
been canned and we are left with `quick arrays' or `simple arrays' or
something (I guess they are 1-dimensional arrays, are named `simple
arrays', and are called `vectors'?).
Of course it is trivial to understand that `vectors' are a specialization
of n-dimensional arrays, but the other day McCarthy said something that
made me wonder about the idea of generalizing too far along these lines.
He said that mathematicians proceed by inventing a perfectly simple,
understandable object and then writing it up. Invariably someone comes
along a year later and says `you weren't thinking straight; your idea is
just a special case of x.' Things go on like this until we have things
like category theory that no one can really understand, but which have the
effect of being the most general generalization of everything.
There are two questions: one regarding where the generalization about vectors
and arrays should be, and one regarding how things have gone politically.
Perhaps in terms of pure programming language theory there is nothing
wrong with making vectors a special case of arrays, even to the extent of
making vector operations macros on array operations. However, imagine
explaining to a beginner, or a clear thinker, or your grandchildren, that
to get a `vector' you really make a `simple array' with all sorts of
bizarre options that simply inform the system that you want a streamlined
data structure. Imagine what you say when they ask you why you didn't just
include vectors to begin with.
Well, you can then go on to explain the joys of generalizations, how
n-dimensional arrays are `the right thing,' and then imagine how you
answer the question: `why, then, is the minimum maximum for n, 63?' I
guess that's 9 times easier to answer than if the minimum maximum were 7.
Clearly one can make this generalization and people can live with it.
We could make the generalization that LIST can take some other options,
perhaps stating that we want a CDR-coded list, and it can define some
accessor functions, and some auxiliary storage, and make arrays a
specialization of CONS cells, but that would be silly (wouldn't it??).
The point is that vectors are a useful enough concept to not need to suffer
being a specialization of something else.
The political point I will not make, but will leave to your imagination.
-rpg-
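[A minimal sketch of the distinction under debate, written with the operator
names Common Lisp eventually settled on (VECTOR, SVREF, MAKE-ARRAY, AREF);
those names are hindsight, not part of the 1982 exchange:]
;; A "vector" in the sense argued for here: one-dimensional, no options.
(SETQ V (VECTOR 'A 'B 'C))      ; or (MAKE-ARRAY 3)
(SVREF V 0)                     ; simple, fast accessor => A
;; The fully general object it would become a special case of:
(SETQ A (MAKE-ARRAY '(3 4) :ELEMENT-TYPE 'BIT :ADJUSTABLE T))
(AREF A 0 2)                    ; generic accessor, works on any array => 0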
∂12-Sep-82 1828 MOON at SCRC-TENEX Vectors versus Arrays
Date: Sunday, 12 September 1982 21:23-EDT
From: MOON at SCRC-TENEX
To: Dick Gabriel <RPG at SU-AI>
Cc: common-lisp at SU-AI
Subject: Vectors versus Arrays
I think the point here, which perhaps you don't agree with, is that
"vector" is not a useful concept to a user (why is a vector different from
a 1-dimensional array?) It's only a useful concept to the implementor, who
thinks "vector = load the Lisp pointer into a base register and index off
of it", but "array = go call an interpretive subroutine to chase indirect
pointers", or the code-bummer, who thinks "vector = fast", "array = slow".
Removing the vector/array distinction from the guts of the language is in
much the same spirit as making the default arithmetic operators generic
across all types of numbers.
I don't think anyone from "the Symbolics crowd convinced us that changing
things were too hard for them"; our point was always that we thought it was
silly to put into a language designed in 1980 a feature that was only there
to save a few lines of code in the compiler for the VAX (and the S1), when
the language already requires declarations to achieve efficiency on those
machines.
If you have a reasonable rebuttal to this argument, I at least will listen.
It is important not to return to "four implementations going in four different
directions."
∂12-Sep-82 2131 Scott E. Fahlman <Fahlman at Cmu-20c> RPG on Vectors versus Arrays
Date: Sunday, 12 September 1982 23:47-EDT
From: Scott E. Fahlman <Fahlman at Cmu-20c>
To: common-lisp at SU-AI
Subject: RPG on Vectors versus Arrays
I'm sure each of us could design a better language than Common Lisp is
turning out to be, and that each of those languages would be different.
My taste is close to RPG's, I think: in general, I like primitives that
I can build with better than generalizations that I can specialize.
However, Common Lisp is politics, not art. If we can come up with a
single language that we can all live with and use for real work, then we
will have accomplished a lot more than if we had individually gone off
and implemented N perfect Lisp systems.
When my grandchildren, if any, ask me why certain things turned out in
somewhat ugly ways, I will tell them that it is for the same reason that
slaves count as 3/5 of a person in the U.S. Constitution -- that is the
price you pay for keeping the South on board (or the North, depending).
A few such crocks are nothing to be ashamed of, as long as the language
is still something we all want to use. Even with the recent spate of
ugly compromises, I think we're doing pretty well overall.
For the record, I too believe that Common Lisp would be a clearer and
more intuitive language if it provided a simple vector data type,
documented as such, and presented hairy multi-D arrays with fill
pointers and displacement as a kind of structure built out of these
vectors. This is what we did in Spice Lisp, not to fit any particular
instruction set, but because it seemed obviously right, clear, and
easily maintainable. I have always felt, and still feel, that the Lisp
Machine folks took a wrong turn very early when they decided to provide
a hairy array datatype as primary with simple vectors as a degenerate
case.
Well, we proposed that Common Lisp should uniformly do this our way,
with vectors as primary, and Symbolics refused to go along with this. I
don't think this was an unreasonable refusal -- it would have required
an immense effort for them to convert, and most of them are now used to
their scheme and like it. They have a big user community already,
unlike the rest of us. So we have spent the last N months trying to
come up with a compromise whereby they could do things their way, we
could do things our way, and everything would still be portable and
non-confusing.
Unfortunately, these attempts to have it both ways led to all sorts of
confusing situations, and many of us gradually came to the conclusion
that, if we couldn't have things entirely our way, then doing things
pretty much the Lisp Machine way (with the addition of the simple-vector
hack) was the next best choice. In my opinion, the current proposal is
slightly worse than making vectors primary, but not much worse, and it
is certainly something that I can live with. The result in this case is
close to what Symbolics wanted all along, but I don't think this is the
result of any unreasonable political tactics on their part. Of course,
if RPG is seriously unhappy with the current proposal, we will have to
try again. There is always the possibility that the set of solutions
acceptable to RPG or to the S1 group does not intersect with the set
acceptable to Symbolics, and that a rift is inevitable, but let us hope
that it does not come down to that.
-- Scott
∂13-Sep-82 1133 RPG Reply to Moon on `Vectors versus Arrays'
To: common-lisp at SU-AI
The difference to a user between a vector and an array is that an array is
a general object, with many features, and a vector is a commonly used
object with few features: in the array-is-king scheme one achieves a
vector via specialization. An analogy can be made between arrays/vectors
and Swiss Army knives. A Swiss army knife is a fine piece of engineering;
and, having been at MIT for a while 10 years ago, I know that they are
well-loved there. However, though a keen chef might own a Swiss Army
knife, he uses his boning knife to de-bone - he could use his Swiss Army
knife via specialization. We all think of programs as programs, not as
categories with flow-of-control as mappings, and, though the latter
is correct, it is the cognitive overhead of it that makes us favor the
former over the latter.
To me the extra few lines of code in the compiler are meaningless (why
should a few extra lines bother the co-author of a 300-page compiler?); a
few extra lines of emitted code are not very relevant either if it comes
to that (it is, after all, an S-1). Had I been concerned with saving `a
few lines of code in the compiler' you can trust that I would have spoken
up earlier about many other things.
The only point I am arguing is that the cognitive overhead of making
vectors a degenerate array *may* be too high.
-rpg-
∂14-Sep-82 1823 JonL at PARC-MAXC Re: `Vectors versus Arrays', and the original compromise
Date: 14 Sep 1982 18:23 PDT
From: JonL at PARC-MAXC
Subject: Re: `Vectors versus Arrays', and the original compromise
In-reply-to: RPG's message of 13 Sep 1982 1133-PDT
To: Dick Gabriel <RPG at SU-AI>, Moon@mit-mc
cc: common-lisp at SU-AI
During the Nov 1981 CommonLisp meeting, the LispM folks (Symbolics, and
RG, and RMS) were adamantly against having any datatype for "chunked"
data other than arrays. I thought, however, that some sort of compromise was
reached shortly afterwards, at least with the Symbolics folks, whereby VECTORs
and STRINGs would exist in CL pretty much the way they do in other lisps not
specifically intended for special purpose computers (e.g., StandardLisp, PSL,
Lisp/370, VAX/NIL etc).
It was admitted that the Lispm crowd could emulate these datatypes by some
trivial variations on their existing array mechanisms -- all that would be forced
on the Lispm crowd is some kind of type-integrity for vectors and strings, and
all that would be forced on the implementors of the other CLs would be the
minimal amount for these two "primitive" datatypes. Portable code ought to use
CHAR or equivalent rather than AREF on strings, but that wouldn't be required,
since all the generic operations would still work for vectors and strings.
So the questions to be asked are:
1) How well have Lisps without fancy array facilities served their
user community? How well have they served the implementors
of that lisp? Franz and PDP10 MacLisp have only primitive
array facilities, and most of the other mentioned lisps have nothing
other than vectors and strings (and possibly bit vectors).
2) How much is the cost of requiring full-generality arrays to be
part of the white pages? For example, can it be assured that all
memory management for them will be written in portable CL, and
thus shared by all implementations? How many different compilers
will have to solve the "optimization" questions before the implementation
dependent upon that compiler will run in real time?
3) Could CL thrive with all the fancy stuff of arrays (leaders, fill pointers,
and even multiple-dimensioning) in the yellow pages? Could a CL
system be reasonably built up from only the VECTOR- and STRING-
specific operations (along with a primitive object-oriented thing, which for
lack of a better name I'll call EXTENDs, as in the NIL design)? As one
data point, I'll mention that VAX/NIL was so built, and clever things
like Flavors were indeed built over the primitives provided.
I'd think that the carefully considered opinions of those doing implementations
on "stock" hardware should prevail, since the extra work engendered for the
special-purpose hardware folks has got to be truly trivial.
It turns out that I've moved from the "stock" camp into the "special-purpose"
camp, and thus in one sense favor the current LispM approach to index-
accessible data (one big uniform data frob, the ARRAY). But this may
turn out to be relatively unimportant -- in talking with several sophisticated
Interlisp users, it seems that the more important issues for them are the ability
to have arrays with user-tailorable accessing methods (I may have to remind
you all that Interlisp doesn't even have multi-dimension arrays!), and the ability
to extend certain generic operators, like PLUS, to arrays (again, the reminder that
Interlisp currently has no standard for object-oriented programming, or for
procedural attachment).
∂04-Oct-82 2145 STEELE at CMU-20C /BALLOT/
Date: 5 Oct 1982 0041-EDT
From: STEELE at CMU-20C
Subject: /BALLOT/
To: common-lisp at SU-AI
cc: b.steele at CMU-10A
?????????????????????????????????????????????????????????????????????????????
? %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%% ?
? % ================================================================= % ?
? % = $$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$ = % ?
? % = $ +++++++++++++++++++++++++++++++++++++++++++++++++++++ $ = % ?
? % = $ + ############################################### + $ = % ?
? % = $ + # ///////////////////////////////////////// # + $ = % ?
? % = $ + # / The October 1982 Common LISP Ballot / # + $ = % ?
? % = $ + # ///////////////////////////////////////// # + $ = % ?
? % = $ + ############################################### + $ = % ?
? % = $ +++++++++++++++++++++++++++++++++++++++++++++++++++++ $ = % ?
? % = $$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$ = % ?
? % ================================================================= % ?
? %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%% ?
?????????????????????????????????????????????????????????????????????????????
Here is what you have all been waiting for! I need an indication of
consensus or lack thereof on the issues that have been discussed by
network mail since the August 1982 meeting, particularly on those issues
that were deferred for proposal for which proposals have now been made.
There are 28 questions, each requiring only a one-letter answer. As always,
if you don't like any of the choices, answer "x". To make my life easier
by permitting mechanical collation of responses, please respond as follows:
(a) send a reply message to Guy.Steele @ CMU-10A.
(b) *PLEASE* be sure the string "/BALLOT/" is in the subject line,
as it is in this message (the double quotes, not the slashes,
are metasyntactic!).
(c) The very first non-blank line of your message should have
exactly 29 non-blank characters on it. The first should be a
tilde ("~") and the rest should be your votes.
You may put spaces between the letters to improve readability.
(d) Following the first non-blank line, place any remarks about
issues on which you voted "x".
Thank you for your help. I would appreciate response by Friday, October 8.
--Guy
1. How shall the case for a floating-point exponent specifier
output by PRINT and FORMAT be determined?
(a) upper case, for example 3.5E6
(b) lower case, for example 3.5e6
(c) a switch
(d) implementation-dependent
2. Shall we change the name SETF to be SET? (y) yes (n) no
3. Shall there be a type specifier QUOTE, such that (QUOTE x) = (MEMBER x)?
Then MEMBER can be eliminated; (MEMBER x y z) = (OR 'x 'y 'z). Also one can
write such things as (OR INTEGER 'FOO) instead of (OR INTEGER (MEMBER FOO)).
(y) yes (n) no
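[A small illustration of the type specifiers at issue; the MEMBER forms are
as already defined, while the QUOTE form exists only as the proposal being
voted on:]
(TYPEP 'FOO '(MEMBER FOO BAR))         ; => T
(TYPEP 10 '(OR INTEGER (MEMBER FOO)))  ; => T
;; Under the proposal the second specifier could be abbreviated:
;; (OR INTEGER 'FOO) == (OR INTEGER (QUOTE FOO)) == (OR INTEGER (MEMBER FOO))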
4. Shall MOON's proposal for LOAD keywords, revised as shown below, be used?
(y) yes (n) no
----------------------------------------------------------------
Date: Wednesday, 25 August 1982, 14:01-EDT
From: David A. Moon <Moon at SCRC-TENEX at MIT-MC>
[slightly revised]
Here is a revised proposal:
Keyword                Default               Meaning
:PACKAGE               NIL                   NIL means use file's native package, non-NIL
                                             is a package or name of package to load into.
:VERBOSE               *LOAD-VERBOSE*        T means print a message saying what file is
                                             being loaded into which package.
:PRINT                 NIL                   T means print values of forms as they are evaluated.
:ERROR                 T                     T means handle errors normally; NIL means that
                                             a file-not-found error should return NIL
                                             rather than signalling an error. LOAD returns
                                             the pathname (or truename??) of the file it
                                             loaded otherwise.
:SET-DEFAULT-PATHNAME  *LOAD-SET-DEFAULT-PATHNAME*
                                             T means update the pathname default
                                             for LOAD from the argument, NIL means don't.
:STREAM                NIL                   Non-NIL means this is an open stream to be
                                             loaded from. (In the Lisp machine, the
                                             :CHARACTERS message to the stream is used to
                                             determine whether it contains text or binary.)
                                             The pathname argument is presumed to be associated
                                             with the stream, in systems where that information
                                             is needed.
The global variables' default values are implementation dependent, according
to local conventions, and may be set by particular users according to their
personal taste.
I left out keywords to allow using a different set of defaults from the normal
one and to allow explicit control over whether a text file or a binary file
is being loaded, since these don't really seem necessary. If we put them in,
the consistent names would be :DEFAULT-PATHNAME, :CHARACTERS, and :BINARY.
----------------------------------------------------------------
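[A hypothetical call using the keywords exactly as proposed above; the
filename is invented, and the keyword names follow the proposal rather than
whatever was finally adopted:]
(LOAD "experiment.lisp" :VERBOSE T    ; announce what is loaded, and into which package
                        :PRINT NIL    ; don't print values of the loaded forms
                        :ERROR NIL)   ; return NIL instead of signalling an error
                                      ; if the file cannot be found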
5. Shall closures over dynamic variables be removed from Common LISP?
(y) yes (n) no
6. Shall LOOP, as summarized below, be included in Common LISP?
(y) yes (n) no
----------------------------------------------------------------
Date: 26 August 1982 18:51-EDT
From: David A. Moon <MOON at MIT-MC>
Here is an extremely brief summary of the proposed new LOOP design, which
has not yet been finalized. Consult the writeup on LOOP in the Lisp
Machine manual or MIT LCS TM-169 for background information. Constructive
comments are very welcome, but please reply to BUG-LOOP at MIT-ML, not to
me personally.
(LOOP form form...) repeatedly evaluates the forms.
In general the body of a loop consists of a series of clauses. Each
clause is either: a series of one or more lists, which are forms to be
evaluated for effect, delimited by a symbol or the end of the loop; or
a clause-introducing symbol followed by idiosyncratic syntax for that
kind of clause. Symbols are compared with SAMEPNAMEP. Atoms other than
symbols are in error, except where a clause's idiosyncratic syntax permits.
1. Primary clauses
1.1 Iteration driving clauses
These clauses run a local variable through a series of values and/or
generate a test for when the iteration is complete.
REPEAT <count>
FOR/AS <var> ...
CYCLE <var> ...
I won't go into the full syntax here. Features include: setting
to values before starting/on the first iteration/on iterations after
the first; iterating through list elements/conses; iterating through
sequence elements, forwards or backwards, with or without sequence-type
declaration; iterating through arithmetic progressions. CYCLE reverts
to the beginning of the series when it runs out instead of terminating
the iteration.
It is also possible to control whether or not an end-test is generated
and whether there is a special epilogue only evaluated when an individual
end-test is triggered.
1.2 Prologue and Epilogue
INITIALLY form form... forms to be evaluated before starting, but
after binding local variables.
FINALLY form form... forms to be evaluated after finishing.
1.3 Delimiter
DO a sort of semicolon needed in odd situations to terminate a clause,
for example between an INITIALLY clause and body forms when no named
clause (e.g. an iteration-driving clause) intervenes.
We prefer this over parenthesization of clauses because of the
general philosophy that it is more important to make the simple cases
as readable as possible than to make micro-improvements in the
complicated cases.
1.4 Blockname
NAMED name Gives the block generated by LOOP a name so that
RETURN-FROM may be used.
This will be changed to conform with whatever is put into Common Lisp
for named PROGs and DOs, if necessary.
2. Relevant special forms
The following special forms are useful inside the body of a LOOP. Note
that they need not appear at top level, but may be nested inside other
Lisp forms, most usefully bindings and conditionals.
(COLLECT <value> [USING <collection-mode>] [INTO <var>] [BACKWARDS]
[FROM <initial-value>] [IF-NONE <expr>] [[TYPE] <type>])
This special form signals an error if not used lexically inside a LOOP.
Each time it is evaluated, <value> is evaluated and accumulated in a way
controlled by <collection-mode>; the default is to form an ordered list.
The accumulated values are returned from the LOOP if it is finished
normally, unless INTO is used to put them into a variable (which gets
bound locally to the LOOP). Certain accumulation modes (boolean AND and
OR) cause immediate termination of the LOOP as soon as the result is known,
when not collecting into a variable.
Collection modes are extensible by the user. A brief summary of predefined
ones includes aggregated boolean tests; lists (both element-by-element and
segment-by-segment); commutative/associative arithmetic operators (plus,
times, max, min, gcd, lcm, count); sets (union, intersection, adjoin);
forming a sequence (array, string).
Multiple COLLECT forms may appear in a single loop; they are checked for
compatibility (the return value cannot both be a list of values and a
sum of numbers, for example).
(RETURN value) returns immediately from a LOOP, as from any other block.
RETURN-FROM works too, of course.
(LOOP-FINISH) terminates the LOOP, executing the epilogue and returning
any value defined by a COLLECT special form.
[Should RESTART be interfaced to LOOP, or only be legal for plain blocks?]
3. Secondary clauses
These clauses are useful abbreviations for things that can also be done
using the primary clauses and Lisp special forms. They exist to make
simple cases more readable. As a matter of style, their use is strongly
discouraged in complex cases, especially those involving complex or
nested conditionals.
3.1 End tests
WHILE <expr> (IF (NOT <expr>) (LOOP-FINISH))
UNTIL <expr> (IF <expr> (LOOP-FINISH))
3.2 Conditionals
WHEN <expr> <clause> The clause is performed conditionally.
IF <expr> <clause> synonymous with WHEN
UNLESS <expr> <clause> opposite of WHEN
AND <clause> May be suffixed to a conditional. These two
ELSE <clause> might be flushed as over-complex.
3.3 Bindings
WITH <var> ... Equivalent to wrapping LET around the LOOP.
This exists to promote readability by decreasing
indentation.
3.4 Return values
RETURN <expr> synonymous with (RETURN <expr>)
COLLECT ... synonymous with (COLLECT ...)
NCONC ... synonymous with (COLLECT ... USING NCONC)
APPEND, SUM, COUNT, MINIMIZE, etc. are analogous
ALWAYS, NEVER, THEREIS abbreviations for boolean collection
4. Extensibility
There are ways for users to define new iteration driving clauses which
I will not go into here. The syntax is more flexible than the existing
path mechanism.
There are also ways to define new kinds of collection.
5. Compatibility
The second generation LOOP will accept most first-generation LOOP forms
and execute them in the same way, although this was not a primary goal.
Some complex (and unreadable!) forms will not execute the same way or
will be errors.
6. Documentation
We intend to come up with much better examples. Examples are very
important for developing a sense of style, which is really what LOOP
is all about.
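Purely by way of illustration (this sketch is not from the proposal; QUEUE,
QUEUE-EMPTY-P, DEQUEUE, and INTERESTING-P are invented names, and the exact
clause syntax is subject to the summary above), a second-generation LOOP
might look like:
(LOOP UNTIL (QUEUE-EMPTY-P QUEUE)        ; secondary end-test clause (3.1)
      DO                                 ; delimiter before body forms (1.3)
      (LET ((ITEM (DEQUEUE QUEUE)))
        (WHEN (INTERESTING-P ITEM)
          (COLLECT ITEM))))              ; COLLECT nested inside Lisp forms (2)
The loop would return the list of interesting items when the end-test
terminates it.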
----------------------------------------------------------------
7. Regardless of the outcome of the previous question, shall CYCLE
be retained and be renamed LOOP, with the understanding that statements
of the construct must be non-atomic, and atoms as "statements" are
reserved for extensions, and any such extensions must be compatible
with the basic meaning as a pure iteration construct?
(y) yes (n) no
8. Shall ARRAY-DIMENSION be changed by exchanging its arguments,
to have the array first and the axis number second, to parallel
other indexing operations?
(y) yes (n) no
9. Shall MACROEXPAND, as described below, replace the current definition?
(y) yes (n) no
----------------------------------------------------------------
Date: Sunday, 29 August 1982, 21:26-EDT
From: David A. Moon <Moon at SCRC-TENEX at MIT-MC>
Here is my promised proposal, with some help from Alan.
MACRO-P becomes a predicate rather than a pseudo-predicate.
Everything on pages 92-93 (29July82) is flushed.
Everything, including the compiler, expands macros by calling MACROEXPAND
or MACROEXPAND-1. A variable, *MACROEXPAND-HOOK*, is provided to allow
implementation of displacing, memoization, etc.
The easiest way to show the details of the proposal is as code. I'll try to
make it exemplary.
(DEFVAR *MACROEXPAND-HOOK* 'FUNCALL)
(DEFUN MACROEXPAND (FORM &AUX CHANGED)
  "Keep expanding the form until it is not a macro-invocation"
  (LOOP (MULTIPLE-VALUE (FORM CHANGED) (MACROEXPAND-1 FORM))
        (IF (NOT CHANGED) (RETURN FORM))))

(DEFUN MACROEXPAND-1 (FORM)
  "If the form is a macro-invocation, return the expanded form and T.
This is the only function that is allowed to call macro expander functions.
*MACROEXPAND-HOOK* is used to allow memoization."
  (DECLARE (VALUES FORM CHANGED-FLAG))
  (COND ((AND (PAIRP FORM) (SYMBOLP (CAR FORM)) (MACRO-P (CAR FORM)))
         (LET ((EXPANDER (---get expander function--- (CAR FORM))))
           ---check for wrong number of arguments---
           (VALUES (FUNCALL *MACROEXPAND-HOOK* EXPANDER FORM) T)))
        (T FORM)))

;You can set *MACROEXPAND-HOOK* to this to get traditional displacing
(DEFUN DISPLACING-MACROEXPAND-HOOK (EXPANDER FORM)
  (LET ((NEW-FORM (FUNCALL EXPANDER FORM)))
    (IF (ATOM NEW-FORM)
        (SETQ NEW-FORM `(PROGN ,NEW-FORM)))
    (RPLACA FORM (CAR NEW-FORM))
    (RPLACD FORM (CDR NEW-FORM))
    FORM))
The above definition of MACROEXPAND-1 is oversimplified, since it can
also expand other things, including lambda-macros (the subject of a separate
proposal that has not been sent yet) and possibly implementation-dependent
things (substs in the Lisp machine, for example).
The important point here is the division of labor. MACROEXPAND-1 takes care
of checking the length of the macro-invocation to make sure it has the right
number of arguments [actually, the implementation is free to choose how much
of this is done by MACROEXPAND-1 and how much is done by code inserted into
the expander function by DEFMACRO]. The hook takes care of memoization. The
macro expander function is only concerned with translating one form into
another, not with bookkeeping. It is reasonable for certain kinds of
program-manipulation programs to bind the hook variable.
I introduced a second value from MACROEXPAND-1 instead of making MACROEXPAND
use the traditional EQ test. Otherwise a subtle change would have been
required to DISPLACING-MACROEXPAND-HOOK, and some writers of hooks might get
it wrong occasionally, and their code would still work 90% of the time.
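For concreteness, a sketch (not part of the proposal) of the two intended
styles of use; WALK-FORM is an invented name for some program-manipulation
program:
;; Installing the displacing hook globally, in the traditional style:
(SETQ *MACROEXPAND-HOOK* #'DISPLACING-MACROEXPAND-HOOK)
;; A program-manipulation program binds the hook so that its own
;; expansions are not memoized into the user's source:
(DEFUN WALK-FORM (FORM)
  (LET ((*MACROEXPAND-HOOK* 'FUNCALL))
    (MACROEXPAND FORM)))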
Other issues:
On page 93 it says that MACROEXPAND ignores local macros established by
MACROLET. This is clearly incorrect; MACROEXPAND has to get called with an
appropriate lexical context available to it in the same way that EVAL does.
They are both parts of the interpreter. I don't have anything to propose
about this now; I just want to point out that there is an issue. I don't
think we need to deal with the issue immediately.
A related issue that must be brought up is whether the Common Lisp subset
should include primitives for accessing and storing macro-expansion
functions. Currently there is only a special form (MACRO) to set a
macro-expander, and no corresponding function. The Lisp machine expedient of
using the normal function-definition primitive (FDEFINE) with an argument of
(MACRO . expander) doesn't work in Common Lisp. Currently there is a gross
way to get the macro expander function, but no reasonable way. I don't have
a clear feeling whether there are programs that would otherwise be portable
except that they need these operations.
----------------------------------------------------------------
10. Shall all global system-defined variables have names beginning
and ending with "*", for example *PRINLEVEL* instead of PRINLEVEL
and *READ-DEFAULT-FLOAT-FORMAT* instead of READ-DEFAULT-FLOAT-FORMAT?
(y) yes (n) no
11. Same question for named constants (other than T and NIL), such as
*PI* for PI and *MOST-POSITIVE-FIXNUM* for MOST-POSITIVE-FIXNUM.
(y) yes (n) no (o) yes, but use a character other than "*"
12. Shall a checking form CHECK-TYPE be introduced as described below?
(y) yes (n) no
----------------------------------------------------------------
Date: Thursday, 26 August 1982, 03:04-EDT
From: David A. Moon <Moon at SCRC-TENEX at MIT-MC>
See p.275 of the 29 July Common Lisp manual and p.275 of the revision
handed out at the Lisp conference.
I suggest that we include CHECK-ARG-TYPE in the language. Although
CHECK-ARG, CHECK-ARG-TYPE, and ASSERT have partially-overlapping
functionality, each has its own valuable uses and I think all three
ought to be in the language.
Note that CHECK-ARG and CHECK-ARG-TYPE are used when you want explicit
run-time checking, including but not limited to writing the interpreter
(which of course is written in Lisp, not machine language!).
The details:
CHECK-ARG-TYPE arg-name type &OPTIONAL type-string [macro]
If (TYPEP arg-name 'type) is false, signal an error. The error message
includes arg-name and a "pretty" English-language form of type, which
can be overridden by specifying type-string (this override is rarely
used). Proceeding from the error sets arg-name to a new value and
makes the test again.
Currently arg-name must be a variable, but it should be generalized to
any SETF'able place.
type and type-string are not evaluated.
This isn't always used for checking arguments, since the value of any
variable can be checked, but it is usually used for arguments and there
isn't an alternate name that more clearly describes what it does.
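An illustrative use (DOUBLE and its argument are invented for this sketch):
(DEFUN DOUBLE (N)
  (CHECK-ARG-TYPE N INTEGER)    ; correctable error if N is not an integer
  (* N 2))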
Date: 2 Sep 1982 12:30 PDT
From: JonL at PARC-MAXC
PDP10 MacLisp and VAX/NIL have had the name CHECK-TYPE for several
years for essentially this functionality (unless someone has recently renamed
it). Since it is used to certify the type of any variable's value, it did not
include the "-ARG" part. The motivation was to have a "checker" which was
more succinct than CHECK-ARGS, but which would generally open-code the
type test (and hence introduce no delay to the non-error case).
I rather prefer the semantics you suggested, namely that the second argument
to CHECK-TYPE be a type name (given the CommonLisp treatment of type
hierarchy). At some level, I'd think a "promise" of fast type checking should
be guaranteed (in compiled code) so that persons will prefer to use this
standardized facility; without some indication of performance, one would
be tempted to write his own in order not to slow down the common case.
----------------------------------------------------------------
13. Shall a checking form CHECK-SUBSEQUENCE be introduced as described below?
(y) yes (n) no
----------------------------------------------------------------
Date: 2 Sep 1982 12:30 PDT
From: JonL at PARC-MAXC
If the general sequence functions continue to thrive in CommonLisp, I'd
like to suggest that the corresponding CHECK-SUBSEQUENCE macro (or
whatever renaming of it should occur) be included in CommonLisp.
CHECK-SUBSEQUENCE (<var> <start-index> <count>) &optional <typename>
provides a way to certify that <var> holds a sequence datum of the type
<typename>, or of any suitable sequence type (e.g., LIST, or STRING or
VECTOR etc) if <typename> is null; and that the indicated subsequence
in it is within the size limits.
[GLS: probably <end> is more appropriate than <count> for Common LISP.]
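An illustrative call under this suggestion (LINE, START, and COUNT are
invented names):
(CHECK-SUBSEQUENCE (LINE START COUNT) STRING)
which would certify that LINE holds a string and that the COUNT elements
starting at index START lie within it.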
----------------------------------------------------------------
14. Shall the functions LINE-OUT and STRING-OUT, eliminated in November,
be reinstated?
(y) yes (n) no
15. Shall the REDUCE function be added as described below?
(y) yes (n) no
----------------------------------------------------------------
Date: 3 September 1982 1756-EDT (Friday)
From: Guy.Steele at CMU-10A
I would like to mildly re-propose the REDUCE function for Common
LISP, now that adding it would require only one new function, not ten
or fifteen:
REDUCE function sequence &KEY :START :END :FROM-END :INITIAL-VALUE
The specified subsequence of "sequence" is reduced, using the "function"
of two arguments. The reduction is left-associative, unless
:FROM-END is not false, in which case it is right-associative.
If an :INITIAL-VALUE is given, it is logically placed before the
"sequence" (after it if :FROM-END is true) and included in the
reduction operation. If no :INITIAL-VALUE is given, then the "sequence"
must not be empty. (An alternative specification: if no :INITIAL-VALUE
is given, and "sequence" is empty, then "function" is called with
zero arguments and the result returned. How about that? This idea
courtesy of Dave Touretzky.)
(REDUCE #'+ '(1 2 3 4)) => 10
(REDUCE #'- '(1 2 3 4)) => -8
(REDUCE #'- '(1 2 3 4) :FROM-END T) => -2 ;APL-style
(REDUCE #'LIST '(1 2 3 4)) => (((1 2) 3) 4)
(REDUCE #'LIST '(1 2 3 4) :FROM-END T) => (1 (2 (3 4)))
(REDUCE #'LIST '(1 2 3 4) :INITIAL-VALUE 'FOO) => ((((FOO 1) 2) 3) 4)
(REDUCE #'LIST '(1 2 3 4) :FROM-END T :INITIAL-VALUE 'FOO)
=> (1 (2 (3 (4 FOO))))
----------------------------------------------------------------
16. Shall the Bawden/Moon solution to the "invisible block" problem
be accepted? The solution is to define (RETURN x) to mean precisely
(RETURN-FROM NIL x), and to specify that essentially all standard
iterators produce blocks named NIL. A block with a name other than
NIL cannot capture a RETURN, only a RETURN-FROM with a matching name.
(y) yes (n) no
17. Shall the TAGBODY construct be incorporated? This expresses just
the behavior of the GO aspect of a PROG. Any atoms in the body
are not evaluated, but serve as tags that may be specified to GO.
Tags have lexical scope and dynamic extent. TAGBODY always returns NIL.
(y) yes (n) no
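For illustration only (not part of the ballot question), such a body might
be written:
(LET ((I 0))
  (TAGBODY
   AGAIN (SETQ I (+ I 1))
         (IF (< I 10) (GO AGAIN)))   ; AGAIN is a tag, not evaluated
  I)                                 ; the TAGBODY returns NIL; the LET returns 10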
18. What shall be done about RESTART? The following alternatives seem to
be the most popular:
(a) Have no RESTART form.
(b) RESTART takes the name of a block. What happens when you say
(RESTART NIL) must be clarified for most iteration constructs.
(c) There is a new binding form called, say, RESTARTABLE.
Within (RESTARTABLE FOO . body), (RESTART FOO) acts as a jump
to the top of the body of the enclosing, matching RESTARTABLE form.
RESTART tags have lexical scope and dynamic extent.
19. Shall there be a built-in identity function, and if so, what shall it
be called?
(c) CR (i) IDENTITY (n) no such function
20. Shall the #*... bit-string syntax replace #"..."? That is, shall what
was before written #"10010" now be written #*10010 ?
(y) yes (n) no
21. Which of the two outstanding array proposals (below) shall be adopted?
(s) the "simple" proposal
(r) the "RPG memorial" proposal
(m) the "simple" proposal as amended by Moon
----------------------------------------------------------------
*********** "Simple" proposal **********
Date: Thursday, 16 September 1982 23:27-EDT
From: Scott E. Fahlman <Fahlman at Cmu-20c>
Here is a revision of my array proposal, fixed up in response to some of
the feedback I've received. See if you like it any better than the
original. In particular, I have explicitly indicated that certain
redundant forms such as MAKE-VECTOR should be retained, and I have
removed the :PRINT keyword, since I now believe that it causes more
trouble than it is worth. A revised printing proposal appears at the
end of the document.
Arrays can be 1-D or multi-D. All arrays can be created by MAKE-ARRAY
and can be accessed with AREF. Storage is done via SETF of an AREF.
The term VECTOR refers to any array of exactly one dimension.
Vectors are special, in that they are also sequences, and can be
referenced by ELT. Also, only vectors can have fill pointers.
Vectors can be specialized along several distinct axes. The first is by
the type of the elements, as specified by the :ELEMENT-TYPE keyword to
MAKE-ARRAY. A vector whose element-type is STRING-CHAR is referred to
as a STRING. Strings, when they print, use the "..." syntax; they also
are the legal inputs to a family of string-functions, as defined in the
manual. A vector whose element-type is BIT (alias (MOD 2)), is a
BIT-VECTOR. These are special because they form the set of legal inputs
to the boolean bit-vector functions. (We might also want to print them
in a strange way -- see below.)
Some implementations may provide a special, highly efficient
representation for simple vectors. A simple vector is (of course) 1-D,
cannot have a fill pointer, cannot be displaced, and cannot be altered
in size after its creation. To get a simple vector, you use the :SIMPLE
keyword to MAKE-ARRAY with a non-null value. If there are any
conflicting options specified, an error is signalled. If an
implementation does not support simple vectors, this keyword/value is
ignored except that the error is still signalled on inconsistent cases.
We need a new set of type specifiers for simple things: SIMPLE-VECTOR,
SIMPLE-STRING, and SIMPLE-BIT-VECTOR, with the corresponding
type-predicate functions. Simple vectors are referenced by AREF in the
usual way, but the user may use THE or DECLARE to indicate at
compile-time that the argument is simple, with a corresponding increase
in efficiency. Implementations that do not support simple vectors
ignore the "simple" part of these declarations.
Strings (simple or non-simple) self-eval; all other arrays cause an
error when passed to EVAL. EQUAL descends into strings, but not
into any other arrays. EQUALP descends into arrays of all kinds,
comparing the corresponding elements with EQUALP. EQUALP is false
if the array dimensions are not the same, but it is not sensitive to
the element-type of the array, whether it is simple, etc. In comparing
the dimensions of vectors, EQUALP uses the length from 0 to the fill
pointer; it does not look at any elements beyond the fill pointer.
The set of type-specifiers required for all of this is ARRAY, VECTOR,
STRING, BIT-VECTOR, SIMPLE-VECTOR, SIMPLE-STRING, SIMPLE-BIT-VECTOR.
Each of these has a corresponding type-P predicate, and each can be
specified in list form, along with the element-type and dimension(s).
MAKE-ARRAY takes the following keywords: :ELEMENT-TYPE, :INITIAL-VALUE,
:INITIAL-CONTENTS, :FILL-POINTER, and :SIMPLE. There is still some
discussion as to whether we should retain array displacement, which
requires :DISPLACED-TO and :DISPLACED-INDEX-OFFSET.
The following functions are redundant, but should be retained for
clarity and emphasis in code: MAKE-VECTOR, MAKE-STRING, MAKE-BIT-VECTOR.
MAKE-VECTOR takes the same keywords as MAKE-ARRAY, but can only take a
single integer as the dimension argument. MAKE-STRING and
MAKE-BIT-VECTOR are like MAKE-VECTOR, but do not take the :ELEMENT-TYPE
keyword, since the element-type is implicit. Similarly, we should
retain the forms VREF, CHAR, and BIT, which are identical in operation
to AREF, but which declare their array argument to be VECTOR, STRING, or
BIT-VECTOR, respectively.
If the :SIMPLE keyword is not specified to MAKE-ARRAY or related forms,
the default is NIL. However, vectors produced by random forms such as
CONCATENATE are simple, and vectors created when the reader sees #(...)
or "..." are also simple.
As a general rule, arrays are printed in a simple format that, upon
being read back in, produces a form that is EQUALP to the original.
However, some information may be lost in the printing process:
element-type restrictions, whether a vector is simple, whether it has a
fill pointer, whether it is displaced, and the identity of any element
that lies beyond the fill pointer. This choice was made to favor ease
of interactive use; if the user really wants to preserve in printed form
some complex data structure containing non-simple arrays, he will have
to develop his own printer.
A switch, SUPPRESS-ARRAY-PRINTING, is provided for users who have lots
of large arrays around and don't want to see them trying to print. If
non-null, this switch causes all arrays except strings to print in a
short, non-readable form that does not include the elements:
#<array-...>. In addition, the printing of arrays and vectors (but not
of strings) is subject to PRINLEVEL and PRINLENGTH.
Strings, simple or otherwise, print using the "..." syntax. Upon
read-in, the "..." syntax creates a simple string.
Bit-vectors, simple or otherwise, print using the #"101010..." syntax.
Upon read-in, this format produces a simple bit-vector. Bit vectors do
observe SUPPRESS-ARRAY-PRINTING.
All other vectors print out using the #(...) syntax, observing
PRINLEVEL, PRINLENGTH, and SUPPRESS-ARRAY-PRINTING. This format reads
in as a simple vector of element-type T.
All other arrays print out using the syntax #nA(...), where n is the
number of dimensions and the list is a nest of sublists n levels deep,
with the array elements at the deepest level. This form observes
PRINLEVEL, PRINLENGTH, and SUPPRESS-ARRAY-PRINTING. This format reads
in as an array of element-type T.
Query: I am still a bit uneasy about the funny string-like syntax for
bit vectors. Clearly we need some way to read these in that does not
turn into a type-T vector. An alternative might be to allow #(...) to
be a vector of element-type T, as it is now, but to take the #n(...)
syntax to mean a vector of element-type (MOD n). A bit-vector would
then be #2(1 0 1 0...) and we would have a parallel notation available
for byte vectors, 32-bit word vectors, etc. The use of the #n(...)
syntax to indicate the length of the vector always struck me as a bit
useless anyway. One flaw in this scheme is that it does not extend to
multi-D arrays. Before someone suggests it, let me say that I don't
like #nAm(...), where n is the rank and m is the element-type -- it
would be too hard to remember which number was which. But even with
this flaw, the #n(...) syntax might be useful.
********** "RPG memorial" proposal **********
Date: Thursday, 23 September 1982 00:38-EDT
From: Scott E. Fahlman <Fahlman at Cmu-20c>
Several people have stated that they dislike my earlier proposal because
it uses the good names (VECTOR, STRING, BIT-VECTOR, VREF, CHAR, BIT) on
general 1-D arrays, and makes the user say "simple" when he wants one of
the more specialized high-efficiency versions. This makes extra work
for users, who will want simple vectors at least 95% of the time. In
addition, there is the argument that simple vectors should be thought of
as a first-class data-type (in implementations that provide them) and
not as a mere degenerate form of array.
Just to see what it looks like, I have re-worked the earlier proposal to
give the good names to the simple forms. This does not really eliminate
any of the classes in the earlier proposal, since each of those classes
had some attributes or operations that distinguished it from the others.
Since there are getting to be a lot of proposals around, we need some
nomenclature for future discussions. My first attempt, with the
user-settable :PRINT option should be called the "print-switch"
proposal; the next one, with the heavy use of the :SIMPLE switch should
be the "simple-switch" proposal; this one can be called the "RPG
memorial" proposal. Let me know what you think about this vs. the
simple-switch version -- I can live with either, but I really would like
to nail this down pretty soon so that we can get on with the
implementation.
Arrays can be 1-D or multi-D. All arrays can be created by MAKE-ARRAY
and can be accessed with AREF. Storage is done via SETF of an AREF.
1-D arrays are special, in that they are also of type SEQUENCE, and can
be referenced by ELT. Also, only 1-D arrays can have fill pointers.
Some implementations may provide a special, highly efficient
representation for simple 1-D arrays, which will be of type VECTOR. A
vector is 1-dimensional, cannot have a fill pointer, cannot be
displaced, and cannot be altered in size after its creation. To get a
vector, you use the :VECTOR keyword to MAKE-ARRAY with a non-null value.
If there are any conflicting options specified, an error is signalled.
The MAKE-VECTOR form is equivalent to MAKE-ARRAY with :VECTOR T.
A STRING is a VECTOR whose element-type (specified by the :ELEMENT-TYPE
keyword) is STRING-CHAR. Strings are special in that they print using
the "..." syntax, and they are legal inputs to a class of "string
functions". Actually, these functions accept any 1-D array whose
element type is STRING-CHAR. This more general class is called a
CHAR-SEQUENCE.
A BIT-VECTOR is a VECTOR whose element-type is BIT, alias (MOD 2).
Bit-vectors are special in that they print using the #*... syntax, and
they are legal inputs to a class of boolean bit-vector functions.
Actually, these functions accept any 1-D array whose element-type is
BIT. This more general class is called a BIT-SEQUENCE.
All arrays can be referenced via AREF, but in some implementations
additional efficiency can be obtained by declaring certain objects to be
vectors, strings, or bit-vectors. This can be done by normal
type-declarations or by special accessing forms. The form (VREF v n) is
equivalent to (AREF (THE VECTOR v) n). The form (CHAR s n) is
equivalent to (AREF (THE STRING s) n). The form (BIT b n) is equivalent
to (AREF (THE BIT-VECTOR b) n).
If an implementation does not support vectors, the :VECTOR keyword is
ignored except that the error is still signalled on inconsistent cases;
The additional restrictions on vectors are not enforced. MAKE-VECTOR is
treated just like the equivalent make-array. VECTORP is true of every
1-D array, STRINGP of every CHAR-SEQUENCE, and BIT-VECTORP of every
BIT-SEQUENCE.
CHAR-SEQUENCEs, including strings, self-eval; all other arrays cause an
error when passed to EVAL. EQUAL descends into CHAR-SEQUENCEs, but not into
any other arrays. EQUALP descends into arrays of all kinds, comparing
the corresponding elements with EQUALP. EQUALP is false if the array
dimensions are not the same, but it is not sensitive to the element-type
of the array, whether it is a vector, etc. In comparing the dimensions of
vectors, EQUALP uses the length from 0 to the fill pointer; it does not
look at any elements beyond the fill pointer.
The set of type-specifiers required for all of this is ARRAY, VECTOR,
STRING, BIT-VECTOR, SEQUENCE, CHAR-SEQUENCE, and BIT-SEQUENCE.
Each of these has a corresponding type-P predicate, and each can be
specified in list form, along with the element-type and dimension(s).
MAKE-ARRAY takes the following keywords: :ELEMENT-TYPE, :INITIAL-VALUE,
:INITIAL-CONTENTS, :FILL-POINTER, :DISPLACED-TO, :DISPLACED-INDEX-OFFSET,
and :VECTOR.
The following functions are redundant, but should be retained for
clarity and emphasis in code: MAKE-VECTOR, MAKE-STRING, MAKE-BIT-VECTOR.
MAKE-VECTOR takes a single length argument, along with :ELEMENT-TYPE,
:INITIAL-VALUE, and :INITIAL-CONTENTS. MAKE-STRING and MAKE-BIT-VECTOR
are like MAKE-VECTOR, but do not take the :ELEMENT-TYPE keyword, since
the element-type is implicit.
If the :VECTOR keyword is not specified to MAKE-ARRAY or related forms,
the default is NIL. However, sequences produced by random forms such as
CONCATENATE are vectors.
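For comparison with the simple-switch proposal, the analogous illustrative
calls here would be:
(MAKE-ARRAY '(3 4) :ELEMENT-TYPE 'FIXNUM :INITIAL-VALUE 0)  ; a 2-D array
(MAKE-ARRAY 80 :ELEMENT-TYPE 'STRING-CHAR :VECTOR T)        ; a STRING
(MAKE-ARRAY 10 :FILL-POINTER 0)            ; a 1-D array with a fill pointer (not a VECTOR)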
Strings always are printed using the "..." syntax. Bit-vectors always
are printed using the #*... syntax. Other vectors always print using
the #(...) syntax. Note that in the latter case, any element-type
restriction is lost upon readin, since this form always produces a
vector of type T when it is read. However, the new vector will be
EQUALP to the old one. The #(...) syntax observes PRINLEVEL,
PRINLENGTH, and SUPPRESS-ARRAY-PRINTING. The latter switch, if non-NIL,
causes the array to print in a non-readable form: #<ARRAY...>.
CHAR-SEQUENCEs print out as though they were strings, using the "..."
syntax. BIT-SEQUENCES print out as BIT-STRINGS, using the #*... syntax.
All other arrays print out using the #nA(...) syntax, where n is the
number of dimensions and the list is actually a list of lists of lists,
nested n levels deep. The array elements appear at the lowest level.
The #A syntax also observes PRINLEVEL, PRINLENGTH, and
SUPPRESS-ARRAY-PRINTING. The #A format reads in as a non-displaced
array of element-type T.
Note that when an array is printed and read back in, the new version is
EQUALP to the original, but some information about the original is lost:
whether the original was a vector or not, element type restrictions,
whether the array was displaced, whether there was a fill pointer, and
the identity of any elements beyond the fill-pointer. This choice was
made to favor ease of interactive use; if the user really wants to
preserve in printed form some complex data structure containing more
complex arrays, he will have to develop his own print format and printer.
********** Moon revision of "simple" proposal **********
Date: Thursday, 30 September 1982 01:59-EDT
From: MOON at SCRC-TENEX
I prefer the "simple switch" to the "RPG memorial" proposal, with one
modification to be found below. The reason for this preference is that
it makes the "good" name, STRING for example, refer to the general class
of objects, relegating the efficiency decision to a modifier ("simple").
The alternative makes the efficiency issue too visible to the casual user,
in my opinion. You have to always be thinking "do I only want this to
work for efficient strings, which are called strings, or should it work
for all kinds of strings, which are called arrays of characters?".
Better to say, "well this works for strings, and hmm, is it worth
restricting it to simple-strings to squeeze out maximal efficiency"?
Lest this seem like I am trying to sabotage the efficiency of Lisp
implementations that are stuck with "stock" hardware, consider the
following:
In the simple switch proposal, how is (MAKE-ARRAY 100) different from
(MAKE-ARRAY 100 :SIMPLE T)? In fact, there is only one difference--it is
an error to use ADJUST-ARRAY-SIZE on the latter array, but not on the
former. Except for this, simpleness consists, simply, of the absence of
options. This suggests to me that the :SIMPLE option be flushed, and
instead a :ADJUSTABLE-SIZE option be added (see, I pronounce the colons).
Even on the Lisp machine, where :ADJUSTABLE-SIZE makes no difference, I
think it would be an improvement, merely for documentation purposes. Now
everything makes sense: if you don't ask for any special features in your
arrays, you get simple ones, which is consistent with the behavior of the
sequence functions returning simple arrays always. And if some
implementation decides they need the sequence functions to return
non-simple arrays, they can always add additional keywords to them to so
specify. The only time you need to know about the word "simple" at all is
if you are making type declarations for efficiency, in which case you have
to decide whether to declare something to be a STRING or a SIMPLE-STRING.
And it makes sense that the more restrictive declaration be a longer word.
This also meets RPG's objection, which I think boils down to the fact
that he thought it was stupid to have :SIMPLE T all over his programs.
He was right.
I'm fairly sure that I don't understand the portability issues that KMP
brought up (I don't have a whole lot of time to devote to this). But I
think that in my proposal STRINGP and SIMPLE-STRINGP are never the same
in any implementation; for instance, in the Lisp machine STRINGP is true
of all strings, while SIMPLE-STRINGP is only true of those that do not
have fill-pointers. If we want to legislate that the :ADJUSTABLE-SIZE
option is guaranteed to turn off SIMPLE-STRINGP, I expect I can dig up
a bit somewhere to remember the value of the option. This would in fact
mean that simple-ness is a completely implementation-independent concept,
and the only implementation-dependence is how much (if any) efficiency
you gain by using it, and how much of that efficiency you get for free
and how much you get only if you make declarations.
Perhaps the last sentence isn't obvious to everyone. On the LM-2 Lisp
machine, a simple string is faster than a non-simple string for many
operations. This speed-up happens regardless of declarations; it is a
result of a run-time dispatch to either fast microcode or slow microcode.
On the VAX with a dumb compiler and no tuning, a simple string is only
faster if you make declarations. On the VAX with a dumb compiler but some
obvious tuning of sequence and string primitives to move type checks out of
inner loops (making multiple copies of the inner loop), simple strings are
faster for these operations, but still slow for AREF unless you make a type
declaration. On the VAX with a medium-smart compiler that does the same
sort of tuning on user functions, simple strings are faster for user
functions, too, if you only declare (OPTIMIZE SPEED) [assuming that the
compiler prefers space over speed by default, which is the right choice in
most implementations], and save space as well as time if you go whole hog
and make a type declaration. On the 3600 Lisp machine, you have sort of a
combination of the first case and the last case.
I also support the #* syntax for bit vectors, rather than the #" syntax.
It's probably mere temporal accident that the simple switch proposal
uses #" while the RPG memorial proposal uses #*.
To sum up:
A vector is a 1-dimensional array. It prints as #(foo bar) or #<array...>
depending on the value of a switch.
A string is a vector of characters. It always prints as "foo". Unlike
all other arrays, strings self-evaluate and are compared by EQUAL.
A bit-vector is a vector of bits. It always prints as #*101. Since as
far as I can tell these are redundant with integers, perhaps like integers
they should self-evaluate and be compared by EQUAL. I don't care.
A simple-vector, simple-string, or simple-bit-vector is one of the above
with none of the following MAKE-ARRAY (or MAKE-STRING) options specified:
:FILL-POINTER
:ADJUSTABLE-SIZE
:DISPLACED-TO
:LEADER-LENGTH, :LEADER-LIST (in implementations that offer them)
There are type names and predicates for the three simple array types. In
some implementations using the type declaration gets you more efficient
code that only works for that simple type, which is why these are in the
language at all. There are no user-visible distinctions associated with
simpleness other than those implied by the absence of the above MAKE-ARRAY
options.
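Under this amendment the common cases might look like (illustrative only):
(MAKE-ARRAY 100)                             ; a simple vector
(MAKE-STRING 80)                             ; a simple string
(MAKE-ARRAY 100 :ADJUSTABLE-SIZE T)          ; not simple; may be grown later
(MAKE-ARRAY 80 :ELEMENT-TYPE 'STRING-CHAR :FILL-POINTER 0)  ; a non-simple string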
----------------------------------------------------------------
22. Shall the following proposal for the OPTIMIZE declaration be adopted?
(y) yes (n) no
----------------------------------------------------------------
Date: Wednesday, 15 September 1982 20:51-EDT
From: Scott E. Fahlman <Fahlman at Cmu-20c>
At the meeting I volunteered to produce a new proposal for the OPTIMIZE
declaration. Actually, I sent out such a proposal a couple of weeks
ago, but somehow it got lost before reaching SU-AI -- both that machine
and CMUC have been pretty flaky lately. I did not realize that the rest
of you had not seen this proposal until a couple of days ago.
Naturally, this is the one thing I did not keep a copy of, so here is my
reconstruction. I should say that this proposal is pretty ugly, but it
is the best that I've been able to come up with. If anyone out there
can do better, feel free.
Guy originally proposed a format like (DECLARE (OPTIMIZE q1 q2 q3)),
where each of the q's is a quality from the set {SIZE, SPEED, SAFETY}.
(He later suggested to me that COMPILATION-SPEED would be a useful
fourth quality.) The ordering of the qualities tells the system which
to optimize for. The obvious problem is that you sometimes want to go
for, say, SPEED above all else, but usually you want some level of
compromise. There is no way in this scheme to specify how strongly the
system should favor one quality over another. We don't need a lot of
gradations for most compilers, but the simple ordering is not expressive
enough.
One possibility is to simply reserve the OPTIMIZE declaration for the
various implementations, but not to specify what is done with it. Then
the implementor could specify in the red pages whatever declaration
scheme his compiler wants to follow. Unfortunately, this means that
such declarations would be of no use when the code is ported to another
Common Lisp, and users would have no portable way to flag that some
function is an inner loop and should be super-fast, or whatever. The
proposal below tries to provide a crude but adequate optimization
declaration for portable code, while still making it possible for users
to fine-tune the compiler's actions for particular implementations.
What I propose is (DECLARE (OPTIMIZE (qual1 value1) (qual2 value2) ...)),
where the qualities are the four mentioned above and each is paired with
a value from 0 to 3 inclusive. The ordering of the clauses doesn't
matter, and any quality not specified gets a default value of 1. The
intent is that {1, 1, 1, 1} would be the compiler's normal default --
whatever set of compromises the implementor believes is appropriate for
his user community. A setting of 0 for some value is an indication that
the associated quality is unimportant in this context and may be
discriminated against freely. A setting of 2 indicates that the quality
should be favored more than normal, and a setting of 3 means to go all
out to favor that quality. Only one quality should be raised above 1 at
any one time.
The above specification scheme is crude, but sufficiently expressive for
most needs in portable code. A compiler implementor will have specific
decisions to make -- whether to suppress inline expansions, whether to
type-check the arguments to CAR and CDR, whether to check for overflow
on arithmetic declared to be FIXNUM, whether to run the peephole
optimizer, etc. -- and it is up to him to decide how to tie these
decisions to the above values so as to match the user's expressed wishes.
These decision criteria should be spelled out in that implementation's red
pages. For example, it might be the case that the peephole optimizer is
not run if COMPILATION-SPEED > 1, that type checking for the argument to
CAR and CDR is suppressed if SPEED > SAFETY+1, etc.
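A sketch of the proposed usage (the function itself is invented):
(DEFUN INNER-PRODUCT-STEP (SUM X Y)
  (DECLARE (OPTIMIZE (SPEED 3) (SAFETY 0)))  ; go all out for speed; safety unimportant here
  (+ SUM (* X Y)))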
----------------------------------------------------------------
23. Shall it be permitted for macro calls to expand into DECLARE forms
and then be recognized as valid declarations? For example:
(DEFMACRO CUBOIDS (&REST VARS)
  `(DECLARE (TYPE (ARRAY SHORT-FLONUM 3) ,@VARS)
            (SPECIAL ,@VARS)
            (OPTIMIZE SPEED)
            (INLINE HACK-CUBOIDS)))
(DEFUN CUBOID-EXPERT (A B C D)
  (CUBOIDS A C)
  ...)
This would not allow macro calls *within* a DECLARE form, only allow
macros to expand into a DECLARE form.
(y) yes (n) no
24. Shall there be printer control variables ARRAY-PRINLEVEL and
ARRAY-PRINLENGTH to control printing of arrays? These would not
limit the printing of strings.
(y) yes (n) no
25. Shall lambda macros, as described below, be incorporated into
the language, and if so, shall they occupy the function name space
or a separate name space?
(f) function name space (s) separate name space (n) no lambda macros
----------------------------------------------------------------
Date: Wednesday, 22 September 1982, 02:27-EDT
From: Howard I. Cannon <HIC at SCRC-TENEX at MIT-MC>
This is the documentation I wrote for lambda-macros as I implemented
them on the Lisp Machine. Please consider this a proposed definition.
Lambda macros may appear in functions where LAMBDA would have previously
appeared. When the compiler or interpreter detects a function whose CAR
is a lambda macro, they "expand" the macro in much the same way that
ordinary Lisp macros are expanded -- the lambda macro is called with the
function as its argument, and is expected to return another function as
its value. Lambda macros may be accessed with the (:lambda-macro
name) function specifier.
lambda-macro function-spec lambda-list &body body
Analogously with macro, defines a lambda macro to be called
function-spec. lambda-list should consist of one variable, which
will be the function that caused the lambda macro to be called. The
lambda macro must return a function. For example:
(lambda-macro ilisp (x)
  `(lambda (&optional ,@(second x) &rest ignore) . ,(cddr x)))
would define a lambda macro called ilisp which would cause the
function to accept arguments like a standard Interlisp function -- all
arguments are optional, and extra arguments are ignored. A typical call
would be:
(fun-with-functional-arg #'(ilisp (x y z) (list x y z)))
Then, any calls to the functional argument that
fun-with-functional-arg executes will pass arguments as if the
number of arguments did not matter.
deflambda-macro
deflambda-macro is like defmacro, but defines a lambda macro
instead of a normal macro.
deflambda-macro-displace
deflambda-macro-displace is like defmacro-displace, but defines
a lambda macro instead of a normal macro.
deffunction function-spec lambda-macro-name lambda-list &body body
deffunction defines a function with an arbitrary lambda macro
instead of lambda. It takes arguments like defun, except that
the argument immediately following the function specifier is the name of
the lambda macro to be used. deffunction expands the lambda macro
immediately, so the lambda macro must have been previously defined.
For example:
(deffunction some-interlisp-like-function ilisp (x y z)
  (list x y z))
would define a function called some-interlisp-like-function, that
would use the lambda macro called ilisp. Thus, the function would
do no checking of the number of arguments.
----------------------------------------------------------------
26. Shall the floating-point manipulations described below be adopted?
(y) as described by MOON
(a) as amended (FLOAT-SIGN changed) by GLS
(n) do not adopt them
----------------------------------------------------------------
Date: Thursday, 30 September 1982 05:55-EDT
From: MOON at SCRC-TENEX
I am not completely happy with the FLOAT-FRACTION, FLOAT-EXPONENT, and
SCALE-FLOAT functions in the Colander edition. At the meeting in August I
was assigned to make a proposal. I am slow.
A minor issue is that the range of FLOAT-FRACTION fails to include zero (of
course it has to), and is inclusive at both ends, which means that there
are two possible return values for some numbers. I guess that this ugliness
has to stay because some implementations require this freedom for hardware
reasons, and it doesn't make a big difference from a numerical analysis point
of view. My proposal is to include zero in the range and to add a note about
two possible values for numbers that are an exact power of the base.
A more major issue is that some applications that break down a flonum into
a fraction and an exponent, or assemble a flonum from a fraction and an
exponent, are best served by representing the fraction as a flonum, while
others are best served by representing it as an integer. An example of
the former is a numerical routine that scales its argument into a certain
range. An example of the latter is a printing routine that must do exact
integer arithmetic on the fraction.
In the agenda for the August meeting it was also proposed that there be
a function to return the precision of the representation of a given flonum
(presumably in bits); this would be in addition to the "epsilon" constants
described on page 143 of the Colander.
A goal of all this is to make it possible to write portable numeric functions,
such as the trigonometric functions and my debugged version of Steele's
totally accurate floating-point number printer. These would be portable
to all implementations but perhaps not as efficient as hand-crafted routines
that avoided bignum arithmetic, used special machine instructions, avoided
computing to more precision than the machine really has, etc.
Proposal:
SCALE-FLOAT x e -> y
y = (* x (expt 2.0 e)) and is a float of the same type as x.
SCALE-FLOAT is more efficient than exponentiating and multiplying, and
also cannot overflow or underflow unless the final result (y) cannot
be represented.
x is also allowed to be a rational, in which case y is of the default
type (same as the FLOAT function).
[x being allowed to be a rational can be removed if anyone objects. But
note that this function has to be generic across the different float types
in any case, so it might as well be generic across all number types.]
UNSCALE-FLOAT y -> x e
The first value, x, is a float of the same type as y. The second value, e,
is an integer such that (= y (* x (expt 2.0 e))).
The magnitude of x is zero or between 1/b and 1 inclusive, where b is the
radix of the representation: 2 on most machines, but examples of 8 and
16, and I think 4, exist. x has the same sign as y.
It is an error if y is a rational rather than a float, or if y is an
infinity. (Leave infinity out of the Common Lisp manual, though).
It is not an error if y is zero.
FLOAT-MANTISSA x -> f
FLOAT-EXPONENT x -> e
FLOAT-SIGN x -> s
FLOAT-PRECISION x -> p
f is a non-negative integer, e is an integer, s is 1 or 0.
(= x (* (SCALE-FLOAT (FLOAT f x) e) (IF (ZEROP S) 1 -1))) is true.
It is up to the implementation whether f is the smallest possible integer
(zeros on the right are removed and e is increased), or f is an integer with
as many bits as the precision of the representation of x, or perhaps a "few"
more. The only thing guaranteed about f is that it is non-negative and
the above equality is true.
f is non-negative to avoid problems with minus zero. s is 1 for minus zero
even though MINUSP is not true of minus zero (otherwise the FLOAT-SIGN function
would be redundant).
p is an integer, the number of bits of precision in x. This is a constant
for each flonum representation type (except perhaps for variable-precision
"bigfloats").
[I am amenable to converting these four functions into one function that
returns four values if anyone can come up with a name. EXPLODE-FLOAT is
the best so far, and it's not very good, especially since the traditional
EXPLODE function has been flushed from Common Lisp. Perhaps DECODE-FLOAT.]
[I am amenable to adding a function that takes f, e, and s as arguments
and returns x. It might be called ENCODE-FLOAT or MAKE-FLOAT. It ought to
take either a type argument or an optional fourth argument, the way FLOAT
takes an optional second argument, which is an example of the type to return.]
FTRUNC x -> fp ip
The FTRUNC function as it is already defined provides the fraction-part and
integer-part operations.
These functions exist now in the Lisp machines, with different names and slightly
different semantics in some cases. They are very easy to write.
Comments? Suggestions for names?
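To make the intended relationships concrete, some illustrative values
(assuming a binary representation with 24 bits of precision; exact results
are implementation-dependent):
(SCALE-FLOAT 0.78125 4) => 12.5
(UNSCALE-FLOAT 12.5) => 0.78125 4      ; (= 12.5 (* 0.78125 (EXPT 2.0 4)))
(FLOAT-MANTISSA 12.5) => 25            ; one legal result (trailing zeros removed)
(FLOAT-EXPONENT 12.5) => -1            ; (= 12.5 (SCALE-FLOAT (FLOAT 25 12.5) -1))
(FLOAT-SIGN -12.5) => 1                ; 1 indicates a negative number
(FLOAT-PRECISION 12.5) => 24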
Date: 4 October 1982 2355-EDT (Monday)
From: Guy.Steele at CMU-10A
I support Moon's proposal, but would like to suggest that FLOAT-SIGN
be modified to
(FLOAT-SIGN x &optional (y (float 1 x)))
returns z such that x and z have same sign and (= (abs y) (abs z)).
In this way (FLOAT-SIGN x) returns 1.0 or -1.0 of the same format as x,
and FLOAT-SIGN of two arguments is what the IEEE proposal calls COPYSIGN,
a useful function indeed in numerical code.
--Guy
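Under this amendment (values illustrative):
(FLOAT-SIGN -12.5) => -1.0        ; same format as the argument
(FLOAT-SIGN -12.5 3.0) => -3.0    ; the COPYSIGN operation
(FLOAT-SIGN 12.5 -3.0) => 3.0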
----------------------------------------------------------------
27. Shall DEFMACRO, DEFSTRUCT, and other defining forms also be
allowed to take documentation strings as possible and appropriate?
(y) yes (n) no
28. Shall the following proposed revision of OPEN keywords be accepted?
(y) yes (n) no
----------------------------------------------------------------
Date: Monday, 4 October 1982, 17:08-EDT
From: Daniel L. Weinreb <dlw at SCRC-TENEX at MIT-MC>
OPEN takes a filename as its first argument. The rest of its arguments
are keyword/value pairs.
WITH-OPEN-STREAM's first subform is a list of a variable (to be bound to
a stream), a filename, and the rest of the elements are keyword/value
pairs.
The keywords are as follows, with their possible values and defaults:
:DIRECTION :INPUT (the default), :OUTPUT, :APPEND, :OVERWRITE, :PROBE
:INPUT - The file is expected to exist. Output operations are not allowed.
:OUTPUT - The file is expected to not exist. A new file is created. Input
operations are not allowed.
:APPEND - The file is expected to exist. Input operations are not allowed.
New characters are appended to the end of the existing file.
:OVERWRITE - The file is expected to exist. All operations are allowed.
The "file pointer" starts at the beginning of the file.
:PROBE - The file may or may not exist. Neither input nor output operations
are allowed. Furthermore, it is not necessary to close the stream.
:CHARACTERS T (the default), NIL, :DEFAULT
T - Open the file for reading/writing of characters.
NIL - Open the file for reading/writing of bytes (non-negative integers).
:DEFAULT - Let the file system decide, based on the file it finds.
:BYTE-SIZE a fixnum or :DEFAULT (the default)
a fixnum - Use this byte size.
:DEFAULT - Let the file system decide, based on the file it finds.
:IF-EXISTS :ERROR (the default), :NEW-VERSION, :RENAME,
:RENAME-AND-DELETE, :OVERWRITE, :APPEND, :REPLACE
Ignored if direction is not :OUTPUT. This tells what to do if the file
that you're trying to create already exists.
:ERROR - Signal an error.
:NEW-VERSION - Create a file with the same filename except with "latest" version.
:RENAME - Rename the existing file to something else and proceed.
:RENAME-AND-DELETE - Rename the existing file and delete (but don't expunge,
if your system has undeletion) it, and proceed.
:OVERWRITE - Open for :OVERWRITE instead. (If your file system doesn't have
this, use :RENAME-AND-DELETE if you have undeletion and :RENAME otherwise.)
:APPEND - Open for :APPEND instead.
:REPLACE - Replace the existing file, deleting it when the stream is closed.
:IF-DOES-NOT-EXIST :ERROR (the default), :CREATE
Ignored if direction is neither :APPEND nor :OVERWRITE
:ERROR - Signal an error.
:CREATE - Create the file and proceed.
Notes:
I renamed :READ-ALTER to :OVERWRITE; :READ-WRITE might also be good.
The :DEFAULT values are very useful, although some systems cannot figure
out this information. :CHARACTERS :DEFAULT is especially useful for
LOAD. Having the byte size come from the file only when the option is
missing, as the latest Common Lisp manual says, is undesirable because
it makes things harder for programs that are passing the value of that
keyword argument as computed from an expression.
Example of OPEN:
(OPEN "f:>dlw>lispm.init" :DIRECTION :OUTPUT)
Example of WITH-OPEN-FILE:
(WITH-OPEN-FILE (STREAM "f:>dlw>lispm.init" :DIRECTION :OUTPUT) ...)
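A further illustrative combination (the form written to the file is invented):
(WITH-OPEN-FILE (STREAM "f:>dlw>lispm.init" :DIRECTION :OUTPUT
                        :IF-EXISTS :NEW-VERSION)
  (PRINT '(SETQ *SEEN-INIT* T) STREAM))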
OPEN can be kept Maclisp compatible by recognizing whether the second
argument is a list or not. Lisp Machine Lisp does this for the benefit
of old programs. The new syntax cannot be mistaken for the old one.
I removed :ECHO because we got rid of MAKE-ECHO-STREAM at the last
meeting.
Other options that the Lisp Machine will probably have, and which might
be candidates for Common Lisp, are: :INHIBIT-LINKS, :DELETED,
:PRESERVE-DATES, and :ESTIMATED-SIZE.
----------------------------------------------------------------
-------
∂13-Oct-82 1309 STEELE at CMU-20C Ballot results
Date: 13 Oct 1982 1608-EDT
From: STEELE at CMU-20C
Subject: Ballot results
To: common-lisp at SU-AI
?????????????????????????????????????????????????????????????????????????????
? %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%% ?
? % ================================================================= % ?
? % = $$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$ = % ?
? % = $ +++++++++++++++++++++++++++++++++++++++++++++++++++++ $ = % ?
? % = $ + ############################################### + $ = % ?
? % = $ + # ///////////////////////////////////////// # + $ = % ?
? % = $ + # / The October 1982 Common LISP Ballot / # + $ = % ?
? % = $ + # / RESULTS / # + $ = % ?
? % = $ + # ///////////////////////////////////////// # + $ = % ?
? % = $ + ############################################### + $ = % ?
? % = $ +++++++++++++++++++++++++++++++++++++++++++++++++++++ $ = % ?
? % = $$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$ = % ?
? % ================================================================= % ?
? %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%% ?
?????????????????????????????????????????????????????????????????????????????
Here are the tabulated votes on the October 1982 Common LISP Ballot. For
each issue the summary vote shown between "***" is what I take to be a
consensus, with a "?" added if I am a bit uncertain. I will edit the
manual according to these guidelines unless someone screams loudly and
soon over some issue. A few of the issues had a very mixed response;
these I have labeled "Controversial" and will take no immediate action on.
--Guy
1. How shall the case for a floating-point exponent specifier
output by PRINT and FORMAT be determined?
(a) upper case, for example 3.5E6
(b) lower case, for example 3.5e6
(c) a switch
(d) implementation-dependent
Issue 1: *** B? ***
Hedrick: B Wholey: - Fahlman: B Weinreb: B Killian: B
Zubkoff: C Moon: B van Roggen: D Masinter: A RMS: B
Dyer: B Bawden: - Feinberg: B Ginder: B Burke et al.: B
Brooks: - Gabriel: A DECLISP: B Steele: C Dill: D
Scherlis: - Pitman: B Anderson: B
2. Shall we change the name SETF to be SET? (y) yes (n) no
Issue 2: *** N ***
Hedrick: N Wholey: N Fahlman: N Weinreb: N Killian: X
Zubkoff: Y Moon: N van Roggen: Y Masinter: N RMS: N
Dyer: N Bawden: N Feinberg: N Ginder: N Burke et al.: N
Brooks: N Gabriel: N DECLISP: N Steele: N Dill: N
Scherlis: Y Pitman: N Anderson: N
Killian: I have been convinced that renaming SETF to SET would be
wrong because it would require changing lots of old code. But,
you seem to have ignored the rest of my suggestion in your
ballot, namely eliminating those horrid "F"s at the end of
several function names (INCF, GETF etc.). If you don't do this,
then you're being inconsistent by not naming PUSH PUSHF, etc.
The "F" at the end of "SETF" would then be there purely for
compatibility, and could be renamed when another Lisp dialect
is designed, years hence.
Pitman: I think we should do this, but not at this time.
RMS: I very strongly do not want to have to change uses of the
traditional function SET in the Lisp machine system.
Feinberg: A better name than SETF (or SET) should be found.
3. Shall there be a type specifier QUOTE, such that (QUOTE x) = (MEMBER x)?
Then MEMBER can be eliminated; (MEMBER x y z) = (OR 'x 'y 'z). Also one can
write such things as (OR INTEGER 'FOO) instead of (OR INTEGER (MEMBER FOO)).
(y) yes (n) no
Issue 3: *** Y? ***
Hedrick: X Wholey: Y Fahlman: N Weinreb: Y Killian: Y
Zubkoff: Y Moon: Y van Roggen: N Masinter: Y RMS: -
Dyer: X Bawden: Y Feinberg: Y Ginder: - Burke et al.: Y
Brooks: Y Gabriel: Y DECLISP: N Steele: Y Dill: Y
Scherlis: Y Pitman: Y Anderson: -
4. Shall MOON's proposal for LOAD keywords, revised as shown below, be used?
(y) yes (n) no
Issue 4: *** Y ***
Hedrick: Y Wholey: Y Fahlman: Y Weinreb: Y Killian: Y
Zubkoff: Y Moon: Y van Roggen: Y Masinter: X RMS: -
Dyer: Y Bawden: Y Feinberg: Y Ginder: Y Burke et al.: Y
Brooks: Y Gabriel: Y DECLISP: Y Steele: Y Dill: X
Scherlis: Y Pitman: X Anderson: -
Moon: I thought we agreed to make LOAD take a stream as its first argument,
instead of a pathname, and flush the :STREAM keyword. :ERROR should
control only "file does not exist" errors, not "host down", "directory
does not exist", "illegal character in file name", "no access to file",
and "file cannot be read because of disk error". Nor should it affect
errors due to evaluation of forms in the file. So I think it needs a
better name; how about :NONEXISTENT-OK?
Masinter: :ERROR, :SET-DEFAULT-PATHNAME options to LOAD should be
rationalized with OPEN; the handling here of search paths should
logically be handled by passing on some of the options from LOAD to OPEN
rather than having LOAD do special path-name processing. This is because
users who manipulate files want to do similar hacking, and the mechanism
should be common.
Pitman: I would vote YES except: As suggested by someone when it was
proposed, any mention of packages should be stricken pending the release
of a package system specification.
Dill: :PACKAGE & :VERBOSE should be flushed, since they are package system
dependent.
5. Shall closures over dynamic variables be removed from Common LISP?
(y) yes (n) no
Issue 5: *** Y? ***
Hedrick: Y Wholey: Y Fahlman: Y Weinreb: N Killian: -
Zubkoff: - Moon: - van Roggen: Y Masinter: - RMS: -
Dyer: X Bawden: - Feinberg: Y Ginder: Y Burke et al.: -
Brooks: - Gabriel: N DECLISP: Y Steele: Y Dill: Y
Scherlis: Y Pitman: N Anderson: -
6. Shall LOOP, as summarized below, be included in Common LISP?
(y) yes (n) no
Issue 6: Controversial
Hedrick: N Wholey: N Fahlman: N Weinreb: Y Killian: Y
Zubkoff: X Moon: - van Roggen: N Masinter: X RMS: N
Dyer: Y Bawden: Y Feinberg: N Ginder: N Burke et al.: Y
Brooks: N Gabriel: X DECLISP: Y Steele: N Dill: N
Scherlis: N Pitman: N Anderson: N
Fahlman: I am in favor of adding the LOOP package as described (once it is
completed) to the language as a portable yellow pages module. I feel
rather strongly that it is premature to add LOOP to the white pages.
Zubkoff: The LOOP macro should be kept in the yellow pages until we've
had a chance to use it for a while and determine whether or not it is the
"right" thing.
Masinter: I feel strongly that the white pages SHOULD include a LOOP construct.
I care less about which one, but I like most of Moon's proposal better than DO
and what I saw of LetS. I'd get rid of AND and ELSE. I don't understand
if the "COLLECT" lexical scoping includes scoping under macro expansion.
Pitman: As a yellow-pages extension is ok by me. I strongly oppose its
placement in the white pages.
Feinberg: We should carefully examine all iteration construct proposals
before committing to any particular one. I feel strongly about
this. I would very much like to see complete documentation
on Loop and any other loop construct which might be included
in Common Lisp, especially before we decide to incorporate them
into the white pages.
Gabriel: I believe that a LOOP construct of some sort is needed: I am
constantly bumping into the limitations of MacLisp-style DO. The
Symbolics people claim that LOOP, as defined in the current proposal, is
well thought-out and indispensable. Not having used it particularly, I
cannot pass judgement on this. I advocate putting LOOP into the hottest
regions of the Yellow Pages, meaning that people should use it immediately
so that any improvements to clarity can be made rapidly. The best possible
LOOP should then be moved to the White Pages.
My prejudice is that LOOP code is very difficult to understand. On the
other hand, closures are difficult for many people to understand, and
perhaps the difficulty is due to unfamiliarity in the LOOP case as it is
in the closure case.
In my current programming I do not define my own iteration construct
(though I have in the past) because I've found that other people (such as
myself at a later date) do not readily understand my code when it contains
idiosyncratic control structures. If we do not standardize on a LOOP
construct soon we will be faced with the fact of many people defining
their own difficult-to-understand control structures.
7. Regardless of the outcome of the previous question, shall CYCLE
be retained and be renamed LOOP, with the understanding that statements
of the construct must be non-atomic, and atoms as "statements" are
reserved for extensions, and any such extensions must be compatible
with the basic meaning as a pure iteration construct?
(y) yes (n) no
Issue 7: *** Y? ***
Hedrick: Y Wholey: - Fahlman: Y Weinreb: Y Killian: Y
Zubkoff: - Moon: Y van Roggen: N Masinter: Y RMS: -
Dyer: Y Bawden: Y Feinberg: N Ginder: - Burke et al.: Y
Brooks: Y Gabriel: Y DECLISP: N Steele: Y Dill: X
Scherlis: Y Pitman: Y Anderson: N
Feinberg: I don't think we should make any commitment at all, even to this
extent. Loop is too nice a word to give up before we even agree about
installing it into the language.
8. Shall ARRAY-DIMENSION be changed by exchanging its arguments,
to have the array first and the axis number second, to parallel
other indexing operations?
(y) yes (n) no
Issue 8: *** Y ***
Hedrick: Y Wholey: Y Fahlman: Y Weinreb: Y Killian: Y
Zubkoff: Y Moon: Y van Roggen: Y Masinter: - RMS: Y
Dyer: Y Bawden: - Feinberg: Y Ginder: Y Burke et al.: Y
Brooks: Y Gabriel: Y DECLISP: Y Steele: Y Dill: X
Scherlis: - Pitman: Y Anderson: Y
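For concreteness, the order adopted here looks like this in use (a sketch
against ARRAY-DIMENSION as it ended up in the final language):

    (let ((a (make-array '(3 4))))
      (list (array-dimension a 0)      ; => 3, array first, axis second
            (array-dimension a 1)))    ; => 4, paralleling AREF and friends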
9. Shall MACROEXPAND, as described below, replace the current definition?
(y) yes (n) no
Issue 9: *** Y ***
Hedrick: Y Wholey: - Fahlman: Y Weinreb: Y Killian: Y
Zubkoff: - Moon: Y van Roggen: Y Masinter: Y RMS: -
Dyer: Y Bawden: Y Feinberg: - Ginder: - Burke et al.: Y
Brooks: Y Gabriel: Y DECLISP: Y Steele: Y Dill: X
Scherlis: Y Pitman: X Anderson: -
Killian: This is ok as far as it goes, but I intend to suggest
additions when I find the time.
Masinter: This seems right but not quite fully specified, e.g. LAMBDA-MACRO.
Pitman: I would vote YES except:
I am uncomfortable with saying that a form returns two
values and then returning only one (letting the rest default to NIL).
Does Common-Lisp specify anything on this? In any case, I would amend
the
    (cond ((and (pairp ...) ...) (values (...) t))
          (t form))
to
    (cond (...) (t (values form nil)))
to make it clear that two values are always returned. If this
modification is made, I am happy with this proposal.
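A minimal sketch of the always-two-values behavior Pitman asks for, written
against a hypothetical *MACRO-TABLE* rather than any particular
implementation's macro lookup:

    (defvar *macro-table* (make-hash-table)
      "Hypothetical table mapping macro names to expander functions.")

    (defun expand-1 (form)
      ;; Return (values expansion t) when FORM is a macro call,
      ;; and (values form nil) otherwise -- two values in both branches.
      (let ((expander (and (consp form)
                           (symbolp (car form))
                           (gethash (car form) *macro-table*))))
        (cond (expander (values (funcall expander form) t))
              (t (values form nil)))))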
10. Shall all global system-defined variables have names beginning
and ending with "*", for example *PRINLEVEL* instead of PRINLEVEL
and *READ-DEFAULT-FLOAT-FORMAT* instead of READ-DEFAULT-FLOAT-FORMAT?
(y) yes (n) no
Issue 10: *** Y ***
Hedrick: Y Wholey: Y Fahlman: Y Weinreb: Y Killian: N
Zubkoff: Y Moon: Y van Roggen: Y Masinter: Y RMS: X
Dyer: N Bawden: N Feinberg: Y Ginder: Y Burke et al.: X
Brooks: Y Gabriel: Y DECLISP: Y Steele: Y Dill: X
Scherlis: Y Pitman: Y Anderson: Y
RMS: I would prefer a character other than *, such as "-".
It is easier to type, and easier to type correctly.
Bawden: I am in favor of variables named *FOO* over variables named FOO only
when that doesn't introduce an incompatibility with existing Lisps. That is
why I voted NO on 10, because it involved an incompatible change to variables
like PRINLEVEL. I voted YES for 11 because currently we have no named
constants as far as I know so there is no incompatibility.
Burke et al.: I really like only ONE "*" at the beginning of the name. I got
tired of shifting two years ago, but conversely couldn't stand not having
the specialness of the variable be obvious.
11. Same question for named constants (other than T and NIL), such as
*PI* for PI and *MOST-POSITIVE-FIXNUM* for MOST-POSITIVE-FIXNUM.
(y) yes (n) no (o) yes, but use a character other than "*"
Issue 11: Controversial
Hedrick: Y Wholey: N Fahlman: Y Weinreb: Y Killian: N
Zubkoff: Y Moon: N van Roggen: Y Masinter: Y RMS: X
Dyer: N Bawden: Y Feinberg: Y Ginder: Y Burke et al.: X
Brooks: O Gabriel: Y DECLISP: Y Steele: N Dill: X
Scherlis: - Pitman: Y Anderson: Y
Fahlman: Whatever is done about global vars, global constants should be the
same. I oppose option 3 or any plan to make them look syntactically
different.
Moon: I like to use the stars to mean "someone may bind this" rather than
"don't use this as a local variable name", which is why I voted no on
putting stars around constants. However, others might disagree with me
and I will defer to the majority. I do think stars around variable names
are important.
12. Shall a checking form CHECK-TYPE be introduced as described below?
(y) yes (n) no
Issue 12: *** Y ***
Hedrick: Y Wholey: Y Fahlman: Y Weinreb: Y Killian: Y
Zubkoff: Y Moon: Y van Roggen: Y Masinter: Y RMS: -
Dyer: Y Bawden: Y Feinberg: - Ginder: Y Burke et al.: Y
Brooks: Y Gabriel: Y DECLISP: Y Steele: Y Dill: Y
Scherlis: Y Pitman: Y Anderson: Y
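A hedged illustration of the sort of use CHECK-TYPE is meant for; the call
below matches CHECK-TYPE as it ended up in the final language, which may
differ in detail from the balloted description:

    (defun safe-sqrt (x)
      ;; Signal a correctable error unless X satisfies the type specifier.
      (check-type x (real 0) "a non-negative real number")
      (sqrt x))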
13. Shall a checking form CHECK-SUBSEQUENCE be introduced as described below?
(y) yes (n) no
Issue 13: Controversial
Hedrick: N Wholey: - Fahlman: N Weinreb: - Killian: Y
Zubkoff: Y Moon: Y van Roggen: Y Masinter: - RMS: -
Dyer: N Bawden: - Feinberg: N Ginder: Y Burke et al.: N
Brooks: - Gabriel: Y DECLISP: Y Steele: Y Dill: N
Scherlis: Y Pitman: Y Anderson: Y
Feinberg: It seems like we're taking this type checking stuff a little
too far. Let the user write his own type checking code, or
make a yellow pages package called Carefully (or Lint) or
something.
Dill: There should be a succinct way of talking about the contents
of sequences, but this particular one doesn't have the right functionality.
I prefer a regular-expression notation of some form, but don't have it
well enough worked out to propose one. Let's leave it out of the language
until someone figures out how to do it well.
14. Shall the functions LINE-OUT and STRING-OUT, eliminated in November,
be reinstated?
(y) yes (n) no
Issue 14: *** Y ***
Hedrick: N Wholey: Y Fahlman: Y Weinreb: Y Killian: Y
Zubkoff: - Moon: - van Roggen: Y Masinter: - RMS: -
Dyer: X Bawden: - Feinberg: Y Ginder: - Burke et al.: Y
Brooks: - Gabriel: - DECLISP: Y Steele: Y Dill: X
Scherlis: - Pitman: Y Anderson: -
15. Shall the REDUCE function be added as described below?
(y) yes (n) no
Issue 15: *** Y ***
Hedrick: N Wholey: - Fahlman: Y Weinreb: Y Killian: Y
Zubkoff: Y Moon: Y van Roggen: Y Masinter: Y RMS: -
Dyer: Y Bawden: - Feinberg: Y Ginder: Y Burke et al.: Y
Brooks: Y Gabriel: Y DECLISP: Y Steele: Y Dill: Y
Scherlis: Y Pitman: - Anderson: N
Moon: Should the name be REDUCE, or something else? Hearn aside, the name
doesn't instantly convey to me what it does. I haven't come up with an
alternative suggestion, however.
Pitman: I have no use for this but have no strong objection.
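For concreteness, some calls to REDUCE as it ended up in the final language
(the balloted description may differ in detail):

    (reduce #'+ '(1 2 3 4))                                  ; => 10
    (reduce #'+ '())                                         ; => 0
    (reduce #'cons '(1 2 3) :from-end t :initial-value nil)  ; => (1 2 3)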
16. Shall the Bawden/Moon solution to the "invisible block" problem
be accepted? The solution is to define (RETURN x) to mean precisely
(RETURN-FROM NIL x), and to specify that essentially all standard
iterators produce blocks named NIL. A block with a name other than
NIL cannot capture a RETURN, only a RETURN-FROM with a matching name.
(y) yes (n) no
Issue 16: *** Y ***
Hedrick: Y Wholey: - Fahlman: Y Weinreb: Y Killian: Y
Zubkoff: Y Moon: Y van Roggen: Y Masinter: - RMS: N
Dyer: Y Bawden: Y Feinberg: Y Ginder: Y Burke et al.: Y
Brooks: Y Gabriel: Y DECLISP: Y Steele: Y Dill: X
Scherlis: Y Pitman: Y Anderson: -
RMS: I am strongly opposed to anything that would require me to find
all the named PROGs in the Lisp machine system which have simple
RETURNs that return from them. This would make a lot of extra work
for me. Please don't impose this on me.
Dill: It seems to me that it ought to be possible to exploit lexical
scoping to solve problems like this in a more general way. If this is
possible, then this proposal is redundant.
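A small sketch of the Bawden/Moon rule in use: RETURN is exactly
(RETURN-FROM NIL ...), standard iterators such as DO establish a block named
NIL, and a named block cannot capture a plain RETURN:

    (defun find-first-even (list)
      (block scan                       ; named block; cannot capture RETURN
        (do ((x list (cdr x)))          ; DO establishes a block named NIL
            ((null x) nil)
          (when (evenp (car x))
            (return (car x))))))        ; same as (return-from nil (car x))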
17. Shall the TAGBODY construct be incorporated? This expresses just
the behavior of the GO aspect of a PROG. Any atoms in the body
are not evaluated, but serve as tags that may be specified to GO.
Tags have lexical scope and dynamic extent. TAGBODY always returns NIL.
(y) yes (n) no
Issue 17: *** Y ***
Hedrick: N Wholey: - Fahlman: Y Weinreb: Y Killian: Y
Zubkoff: Y Moon: Y van Roggen: Y Masinter: Y RMS: X
Dyer: Y Bawden: Y Feinberg: Y Ginder: Y Burke et al.: Y
Brooks: Y Gabriel: Y DECLISP: Y Steele: Y Dill: N
Scherlis: - Pitman: Y Anderson: Y
RMS: Why must GOBODY [sic] always return NIL just because PROG does?
It is just as easy to make GOBODY return the value of the last form in
it. We can consider a PROG to expand into a GOBODY followed by a NIL.
Feinberg: A better name than TAGBODY should be found.
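A minimal sketch of TAGBODY as balloted: atoms in the body act as tags for
GO, the other forms are evaluated in order, and the construct itself returns
NIL:

    (defun count-down (n)
      (tagbody
       again                            ; an atom in the body: a tag, not a form
         (when (plusp n)
           (decf n)
           (go again)))                 ; lexically scoped jump to the tag
      n)                                ; the TAGBODY itself returned NIL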
18. What shall be done about RESTART? The following alternatives seem to
be the most popular:
(a) Have no RESTART form.
(b) RESTART takes the name of a block. What happens when you say
(RESTART NIL) must be clarified for most iteration constructs.
(c) There is a new binding form called, say, RESTARTABLE.
Within (RESTARTABLE FOO . body), (RESTART FOO) acts as a jump
to the top of the body of the enclosing, matching RESTARTABLE form.
RESTART tags have lexical scope and dynamic extent.
Issue 18: *** A ***
Hedrick: A Wholey: A Fahlman: A Weinreb: A Killian: A
Zubkoff: A Moon: A van Roggen: A Masinter: A RMS: C
Dyer: A Bawden: A Feinberg: A Ginder: A Burke et al.: A
Brooks: A Gabriel: B DECLISP: A Steele: C Dill: X
Scherlis: - Pitman: C Anderson: A
Fahlman: I now believe that RESTART is more trouble than it is worth. I am
strongly opposed to any plan, such as option 3, that would add a RESTART
form but make it impossible to use this with the implicit block around a
DEFUN. If you have to introduce a RESTARTABLE block, you may as
well use PROG/GO.
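Although the vote here is (a), option (c) is easy enough to sketch on top of
TAGBODY. The names MY-RESTARTABLE and MY-RESTART below are hypothetical, and
the sketch does not handle nested forms with different names:

    (defmacro my-restartable (name &body body)
      ;; (MY-RESTART NAME) inside BODY jumps back to the top of BODY.
      (let ((top (gensym "TOP")))
        `(macrolet ((my-restart (tag)
                      (if (eq tag ',name)
                          '(go ,top)
                          (error "No enclosing MY-RESTARTABLE form named ~S" tag))))
           (tagbody
              ,top
              ,@body))))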
19. Shall there be a built-in identity function, and if so, what shall it
be called?
(c) CR (i) IDENTITY (n) no such function
Issue 19: *** I ***
Hedrick: I Wholey: I Fahlman: I Weinreb: I Killian: -
Zubkoff: I Moon: I van Roggen: I Masinter: I RMS: I
Dyer: X Bawden: I Feinberg: I Ginder: I Burke et al.: I
Brooks: I Gabriel: - DECLISP: I Steele: I Dill: X
Scherlis: I Pitman: I Anderson: -
RMS: The canonical identity function is now called PROG1, but the name
IDENTITY is ok by me.
20. Shall the #*... bit-string syntax replace #"..."? That is, shall what
was before written #"10010" now be written #*10010 ?
(y) yes (n) no
Issue 20: *** Y ***
Hedrick: Y Wholey: - Fahlman: Y Weinreb: Y Killian: Y
Zubkoff: Y Moon: Y van Roggen: Y Masinter: Y RMS: -
Dyer: X Bawden: Y Feinberg: Y Ginder: Y Burke et al.: Y
Brooks: Y Gabriel: N DECLISP: Y Steele: Y Dill: Y
Scherlis: Y Pitman: Y Anderson: Y
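The adopted syntax in use (as in the final language):

    #*10010                 ; reads as a five-element bit vector
    (bit #*10010 0)         ; => 1
    (bit #*10010 2)         ; => 0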
21. Which of the two outstanding array proposals (below) shall be adopted?
(s) the "simple" proposal
(r) the "RPG memorial" proposal
(m) the "simple" proposal as amended by Moon
Issue 21: *** M? ***
Hedrick: M Wholey: - Fahlman: M Weinreb: M Killian: M
Zubkoff: M Moon: M van Roggen: M Masinter: - RMS: M
Dyer: - Bawden: M Feinberg: M Ginder: M Burke et al.: M
Brooks: R Gabriel: X DECLISP: M Steele: M Dill: M
Scherlis: M Pitman: M Anderson: M
Brooks: if not "r" then I prefer "m".
Gabriel: I prefer the "RPG memorial", but I do not feel so strongly
about this that I would sink the Common Lisp effort over it.
22. Shall the following proposal for the OPTIMIZE declaration be adopted?
(y) yes (n) no
Issue 22: *** Y ***
Hedrick: Y Wholey: - Fahlman: Y Weinreb: Y Killian: N
Zubkoff: Y Moon: Y van Roggen: Y Masinter: N RMS: -
Dyer: Y Bawden: - Feinberg: N Ginder: Y Burke et al.: Y
Brooks: - Gabriel: Y DECLISP: Y Steele: - Dill: X
Scherlis: N Pitman: X Anderson: X
Pitman: I would vote YES except:
The use of numbers instead of keywords bothers me. The section saying
which numbers can be which values and how those values will be interpreted
seems too FORTRANesque to me. I think these values should be just keywords
or the tight restrictions on their values should be lifted. The only use
for numbers would be to allow users a fluid range of possibilities.
Feinberg: Keywords instead of numbers would be nicer. How about
:dont-care, :low, :medium, :high?
Dill: I don't think that we need an optimize declaration in common lisp.
It's not necessary for portability, and intensely dependent on compiler
implementations. If we must have one, I strongly favor the Fahlman proposal
over proposals that would have symbolic specifications.
23. Shall it be permitted for macro calls to expand into DECLARE forms
and then be recognized as valid declarations?
This would not allow macro calls *within* a DECLARE form, only allow
macros to expand into a DECLARE form.
(y) yes (n) no
Issue 23: *** Y ***
Hedrick: Y Wholey: Y Fahlman: Y Weinreb: Y Killian: Y
Zubkoff: Y Moon: Y van Roggen: Y Masinter: Y RMS: -
Dyer: Y Bawden: Y Feinberg: Y Ginder: - Burke et al.: Y
Brooks: Y Gabriel: Y DECLISP: Y Steele: Y Dill: X
Scherlis: Y Pitman: Y Anderson: Y
Pitman: I also support allowing multiple declare forms at the top of
a bind form, i.e.,
(LAMBDA (X Y) (DECLARE (SPECIAL X)) (DECLARE (SPECIAL Y)) ...)
for ease in macros. Steele's proposed evaluator did this and it wasn't
notably expensive.
24. Shall there be printer control variables ARRAY-PRINLEVEL and
ARRAY-PRINLENGTH to control printing of arrays? These would not
limit the printing of strings.
(y) yes (n) no
Issue 24: Controversial
Hedrick: N Wholey: Y Fahlman: N Weinreb: Y Killian: Y
Zubkoff: Y Moon: Y van Roggen: Y Masinter: N RMS: -
Dyer: Y Bawden: Y Feinberg: Y Ginder: Y Burke et al.: Y
Brooks: - Gabriel: N DECLISP: Y Steele: Y Dill: X
Scherlis: Y Pitman: N Anderson: Y
25. Shall lambda macros, as described below, be incorporated into
the language, and if so, shall they occupy the function name space
or a separate name space?
(f) function name space (s) separate name space (n) no lambda macros
Issue 25: Controversial
Hedrick: N Wholey: - Fahlman: N Weinreb: Y Killian: F
Zubkoff: - Moon: S van Roggen: S Masinter: D RMS: S
Dyer: S Bawden: S Feinberg: N Ginder: - Burke et al.: S
Brooks: N Gabriel: F DECLISP: S Steele: N Dill: N
Scherlis: - Pitman: S Anderson: N
Fahlman: I seem to be unable to locate any explanation of why Lambda macros
are useful enough to be worth the bother. Looks like needless hair to
me, but I seem to dimly recall some arguments for why they were needed.
I'm not passionately opposed, but every page full of hairy stuff in the
manual hurts us.
Masinter: Spec here not consistent with MACROEXPAND proposal.
Feinberg: Once again, hair that I don't think needs to be standardized on.
I think most users would never need this, and perhaps a better
way to do this can be thought of.
26. Shall the floating-point manipulations described below be adopted?
(y) as described by MOON
(a) as amended (FLOAT-SIGN changed) by GLS
(n) do not adopt them
Issue 26: *** A ***
Hedrick: A Wholey: A Fahlman: A Weinreb: A Killian: A
Zubkoff: A Moon: Y van Roggen: A Masinter: - RMS: -
Dyer: - Bawden: - Feinberg: - Ginder: A Burke et al.: -
Brooks: - Gabriel: A DECLISP: A Steele: A Dill: X
Scherlis: - Pitman: - Anderson: Y
Killian: Since TRUNC was renamed TRUNCATE at the last meeting, the
FTRUNC in this proposal would have to become FTRUNCATE.
27. Shall DEFMACRO, DEFSTRUCT, and other defining forms also be
allowed to take documentation strings as possible and appropriate?
(y) yes (n) no
Issue 27: *** Y ***
Hedrick: Y Wholey: Y Fahlman: Y Weinreb: Y Killian: Y
Zubkoff: Y Moon: Y van Roggen: Y Masinter: Y RMS: Y
Dyer: Y Bawden: Y Feinberg: Y Ginder: Y Burke et al.: Y
Brooks: Y Gabriel: Y DECLISP: Y Steele: Y Dill: X
Scherlis: Y Pitman: Y Anderson: Y
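What the adopted change permits, shown with DEFMACRO as an example (a
sketch; WITH-NOTHING is a made-up name):

    (defmacro with-nothing (&body body)
      "Evaluate BODY with no added behavior; exists only as an example."
      `(progn ,@body))

    (documentation 'with-nothing 'function)   ; => the string above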
28. Shall the following proposed revision of OPEN keywords be accepted?
(y) yes (n) no
Issue 28: *** Y ***
Hedrick: Y Wholey: Y Fahlman: Y Weinreb: Y Killian: Y
Zubkoff: Y Moon: Y van Roggen: Y Masinter: Y RMS: -
Dyer: Y Bawden: Y Feinberg: Y Ginder: Y Burke et al.: Y
Brooks: Y Gabriel: Y DECLISP: Y Steele: Y Dill: X
Scherlis: - Pitman: X Anderson: Y
DECLISP: Either READ-ALTER, READ-WRITE, or UPDATE should replace the :OVERWRITE
keyword for :DIRECTION. Overwrite suggests that an existing file will be
destroyed by having new data written into the same space.
-------
Then, any calls to the functional argument that fun-with-functional-arg
executes will pass arguments as if the number of arguments did not matter.

deflambda-macro
  deflambda-macro is like defmacro, but defines a lambda macro instead
  of a normal macro.

deflambda-macro-displace
  deflambda-macro-displace is like defmacro-displace, but defines a
  lambda macro instead of a normal macro.

deffunction function-spec lambda-macro-name lambda-list &body body
  deffunction defines a function with an arbitrary lambda macro instead
  of lambda. It takes arguments like defun, except that the argument
  immediately following the function specifier is the name of the lambda
  macro to be used. deffunction expands the lambda macro immediately, so
  the lambda macro must have been previously defined. For example:

    (deffunction some-interlisp-like-function ilisp (x y z)
      (list x y z))

  would define a function called some-interlisp-like-function, that would
  use the lambda macro called ilisp. Thus, the function would do no
  checking of the number of arguments.
----------------------------------------------------------------
26. Shall the floating-point manipulations described below be adopted?
(y) as described by MOON
(a) as amended (FLOAT-SIGN changed) by GLS
(n) do not adopt them
----------------------------------------------------------------
Date: Thursday, 30 September 1982 05:55-EDT
From: MOON at SCRC-TENEX
I am not completely happy with the FLOAT-FRACTION, FLOAT-EXPONENT, and
SCALE-FLOAT functions in the Colander edition. At the meeting in August I
was assigned to make a proposal. I am slow.
A minor issue is that the range of FLOAT-FRACTION fails to include zero (of
course it has to), and is inclusive at both ends, which means that there
are two possible return values for some numbers. I guess that this ugliness
has to stay because some implementations require this freedom for hardware
reasons, and it doesn't make a big difference from a numerical analysis point
of view. My proposal is to include zero in the range and to add a note about
two possible values for numbers that are an exact power of the base.
A more major issue is that some applications that break down a flonum into
a fraction and an exponent, or assemble a flonum from a fraction and an
exponent, are best served by representing the fraction as a flonum, while
others are best served by representing it as an integer. An example of
the former is a numerical routine that scales its argument into a certain
range. An example of the latter is a printing routine that must do exact
integer arithmetic on the fraction.
In the agenda for the August meeting it was also proposed that there be
a function to return the precision of the representation of a given flonum
(presumably in bits); this would be in addition to the "epsilon" constants
described on page 143 of the Colander.
A goal of all this is to make it possible to write portable numeric functions,
such as the trigonometric functions and my debugged version of Steele's
totally accurate floating-point number printer. These would be portable
to all implementations but perhaps not as efficient as hand-crafted routines
that avoided bignum arithmetic, used special machine instructions, avoided
computing to more precision than the machine really has, etc.
Proposal:
SCALE-FLOAT x e -> y
y = (* x (expt 2.0 e)) and is a float of the same type as x.
SCALE-FLOAT is more efficient than exponentiating and multiplying, and
also cannot overflow or underflow unless the final result (y) cannot
be represented.
x is also allowed to be a rational, in which case y is of the default
type (same as the FLOAT function).
[x being allowed to be a rational can be removed if anyone objects. But
note that this function has to be generic across the different float types
in any case, so it might as well be generic across all number types.]
UNSCALE-FLOAT y -> x e
The first value, x, is a float of the same type as y. The second value, e,
is an integer such that (= y (* x (expt 2.0 e))).
The magnitude of x is zero or between 1/b and 1 inclusive, where b is the
radix of the representation: 2 on most machines, but examples of 8 and
16, and I think 4, exist. x has the same sign as y.
It is an error if y is a rational rather than a float, or if y is an
infinity. (Leave infinity out of the Common Lisp manual, though).
It is not an error if y is zero.
FLOAT-MANTISSA x -> f
FLOAT-EXPONENT x -> e
FLOAT-SIGN x -> s
FLOAT-PRECISION x -> p
f is a non-negative integer, e is an integer, s is 1 or 0.
(= x (* (SCALE-FLOAT (FLOAT f x) e) (IF (ZEROP S) 1 -1))) is true.
It is up to the implementation whether f is the smallest possible integer
(zeros on the right are removed and e is increased), or f is an integer with
as many bits as the precision of the representation of x, or perhaps a "few"
more. The only thing guaranteed about f is that it is non-negative and
the above equality is true.
f is non-negative to avoid problems with minus zero. s is 1 for minus zero
even though MINUSP is not true of minus zero (otherwise the FLOAT-SIGN function
would be redundant).
p is an integer, the number of bits of precision in x. This is a constant
for each flonum representation type (except perhaps for variable-precision
"bigfloats").
[I am amenable to converting these four functions into one function that
returns four values if anyone can come up with a name. EXPLODE-FLOAT is
the best so far, and it's not very good, especially since the traditional
EXPLODE function has been flushed from Common Lisp. Perhaps DECODE-FLOAT.]
[I am amenable to adding a function that takes f, e, and s as arguments
and returns x. It might be called ENCODE-FLOAT or MAKE-FLOAT. It ought to
take either a type argument or an optional fourth argument, the way FLOAT
takes an optional second argument, which is an example of the type to return.]
FTRUNC x -> fp ip
The FTRUNC function as it is already defined provides the fraction-part and
integer-part operations.
These functions exist now in the Lisp machines, with different names and slightly
different semantics in some cases. They are very easy to write.
Comments? Suggestions for names?
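A hedged sketch of Moon's decomposition written in terms of
INTEGER-DECODE-FLOAT and SCALE-FLOAT as they exist in the final language;
FLOAT-MANTISSA, FLOAT-EXPONENT, and FLOAT-SIGN-BIT below are the proposal's
accessors under slightly altered names (the last renamed to avoid the
standard FLOAT-SIGN), and the details may differ from what was finally
adopted:

    (defun float-mantissa (x)
      ;; F in Moon's notation: a non-negative integer.
      (nth-value 0 (integer-decode-float x)))

    (defun float-exponent (x)
      ;; E in Moon's notation.
      (nth-value 1 (integer-decode-float x)))

    (defun float-sign-bit (x)
      ;; S in Moon's notation: 1 for negative, 0 otherwise.
      (if (minusp (nth-value 2 (integer-decode-float x))) 1 0))

    ;; Moon's stated identity, checked for a sample x:
    (let* ((x -6.25)
           (f (float-mantissa x))
           (e (float-exponent x))
           (s (float-sign-bit x)))
      (= x (* (scale-float (float f x) e) (if (zerop s) 1 -1))))   ; => T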
Date: 4 October 1982 2355-EDT (Monday)
From: Guy.Steele at CMU-10A
I support Moon's proposal, but would like to suggest that FLOAT-SIGN
be modified to
(FLOAT-SIGN x &optional (y (float 1 x)))
returns z such that x and z have same sign and (= (abs y) (abs z)).
In this way (FLOAT-SIGN x) returns 1.0 or -1.0 of the same format as x,
and FLOAT-SIGN of two arguments is what the IEEE proposal calls COPYSIGN,
a useful function indeed in numerical code.
--Guy
----------------------------------------------------------------
27. Shall DEFMACRO, DEFSTRUCT, and other defining forms also be
allowed to take documentation strings as possible and appropriate?
(y) yes (n) no
28. Shall the following proposed revision of OPEN keywords be accepted?
(y) yes (n) no
----------------------------------------------------------------
Date: Monday, 4 October 1982, 17:08-EDT
From: Daniel L. Weinreb <dlw at SCRC-TENEX at MIT-MC>
OPEN takes a filename as its first argument. The rest of its arguments
are keyword/value pairs.
WITH-OPEN-STREAM's first subform is a list of a variable (to be bound to
a stream), a filename, and the rest of the elements are keyword/value
pairs.
The keywords are as follows, with their possible values and defaults:
:DIRECTION :INPUT (the default), :OUTPUT, :APPEND, :OVERWRITE, :PROBE
:INPUT - The file is expected to exist. Output operations are not allowed.
:OUTPUT - The file is expected to not exist. A new file is created. Input
operations are not allowed.
:APPEND - The file is expected to exist. Input operations are not allowed.
New characters are appended to the end of the existing file.
:OVERWRITE - The file is expected to exist. All operations are allowed.
The "file pointer" starts at the beginning of the file.
:PROBE - The file may or may not exist. Neither input nor output operations
are allowed. Furthermore, it is not necessary to close the stream.
:CHARACTERS T (the default), NIL, :DEFAULT
T - Open the file for reading/writing of characters.
NIL - Open the file for reading/writing of bytes (non-negative integers).
:DEFAULT - Let the file system decide, based on the file it finds.
:BYTE-SIZE a fixnum or :DEFAULT (the default)
a fixnum - Use this byte size.
:DEFAULT - Let the file system decide, based on the file it finds.
:IF-EXISTS :ERROR (the default), :NEW-VERSION, :RENAME,
:RENAME-AND-DELETE, :OVERWRITE, :APPEND, :REPLACE
Ignored if direction is not :OUTPUT. This tells what to do if the file
that you're trying to create already exists.
:ERROR - Signal an error.
:NEW-VERSION - Create a file with the same filename except with "latest" version.
:RENAME - Rename the existing file to something else and proceed.
:RENAME-AND-DELETE - Rename the existing file and delete (but don't expunge,
if your system has undeletion) it, and proceed.
:OVERWRITE - Open for :OVERWRITE instead. (If your file system doesn't have
this, use :RENAME-AND-DELETE if you have undeletion and :RENAME otherwise.)
:APPEND - Open for :APPEND instead.
:REPLACE - Replace the existing file, deleting it when the stream is closed.
:IF-DOES-NOT-EXIST :ERROR (the default), :CREATE
Ignored if direction is neither :APPEND nor :OVERWRITE
:ERROR - Signal an error.
:CREATE - Create the file and proceed.
Notes:
I renamed :READ-ALTER to :OVERWRITE; :READ-WRITE might also be good.
The :DEFAULT values are very useful, although some systems cannot figure
out this information. :CHARACTERS :DEFAULT is especially useful for
LOAD. Having the byte size come from the file only when the option is
missing, as the latest Common Lisp manual says, is undesirable because
it makes things harder for programs that are passing the value of that
keyword argument as computed from an expression.
Example of OPEN:
(OPEN "f:>dlw>lispm.init" :DIRECTION :OUTPUT)
Example of WITH-OPEN-FILE:
(WITH-OPEN-FILE (STREAM "f:>dlw>lispm.init" :DIRECTION :OUTPUT) ...)
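Two more hedged examples in the same style, keeping to the proposal's rules
that :IF-EXISTS matters only for :OUTPUT and :IF-DOES-NOT-EXIST only for
:APPEND and :OVERWRITE (the pathname is just the one above reused):

    (WITH-OPEN-FILE (STREAM "f:>dlw>lispm.init"
                            :DIRECTION :OUTPUT :IF-EXISTS :NEW-VERSION)
      ...)
    (WITH-OPEN-FILE (STREAM "f:>dlw>lispm.init"
                            :DIRECTION :APPEND :IF-DOES-NOT-EXIST :CREATE)
      ...)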
OPEN can be kept Maclisp compatible by recognizing whether the second
argument is a list or not. Lisp Machine Lisp does this for the benefit
of old programs. The new syntax cannot be mistaken for the old one.
I removed :ECHO because we got rid of MAKE-ECHO-STREAM at the last
meeting.
Other options that the Lisp Machine will probably have, and which might
be candidates for Common Lisp, are: :INHIBIT-LINKS, :DELETED,
:PRESERVE-DATES, and :ESTIMATED-SIZE.
----------------------------------------------------------------
-------
∂14-Aug-83 1216 FAHLMAN@CMU-CS-C.ARPA Things to do
Received: from CMU-CS-C by SU-AI with TCP/SMTP; 14 Aug 83 12:16:28 PDT
Received: ID <FAHLMAN@CMU-CS-C.ARPA>; Sun 14 Aug 83 15:16:47-EDT
Date: Sun, 14 Aug 1983 15:16 EDT
From: Scott E. Fahlman <Fahlman@CMU-CS-C.ARPA>
To: common-lisp @ SU-AI.ARPA
Subject: Things to do
A bunch of things were put off without decisions or were patched over in
the effort to get agreement on the first edition. Most of the people
who have been intensively involved in the language design will be tied
up for another couple of months getting their implementations up to spec
and tweaking them for performance. However, it is perhaps not too soon
to begin thinking about what major additions/changes we want to get into
the second edition, so that those who want to make proposals can begin
preparing them and so that people can make their plans in light of what
is likely to be coming.
Here's a list of the major things that I see on the agenda for the next
year or so. Some are yellow-pages packages, some have deep roots
and require white-pages support, and some are so pervasive that they
will probably migrate into the white pages after a probationary period
in yellow-land. I'm sure I'm forgetting a few things that have already
been suggested. I'm also sure that people will have some additional
proposals to make. I am not including very minor and trivial changes
that we might want to make in the language as we gain some experience
with it.
1. Someone needs to implement the transcendental functions for complex
numbers in a portable way so that we can all use these. The functions
should be parameterized so that they will work for all the various
floating-point precisions that implementations might offer. The design
should be uncontroversial, since it is already specified in the manual.
I don't think we have any volunteers to do this at present.
2. We need to re-think the issue of function specs, and agree on what
should go into the white pages next time around. Moon's earlier
proposal, or some subset of it, is probably what we want to go with.
3. At one point HIC offered to propose a minimal set of white-pages
support for efficient implementation of a portable flavor system, and to
supply the portable part. The white-pages support would also be usable
by other object-oriented paradigms with different inheritance schemes
(that's the controversial part). After a brief exchange of messages,
HIC got super-busy on other matters and we haven't heard much since
then. Either HIC or someone else needs to finish this proposal, so that
we can put in the low-level support and begin playing with the portable
implementation of flavors. Only after more Common Lisp users have had
some opportunity to play with flavors will it make sense to consider
including them (or some variation) in the white pages. There is a lot
of interest in this out in user-land.
4. We need some sort of iteration facility more powerful than DO. The
existing proposals are some extensively cleaned-up revision of LOOP and
Dick Waters' LETS package. There may be some other ideas out there as
well. Probably the best way to proceed here is for the proponents of
each style to implement their package portably for the yellow pages and
let the customers decide what they like. If a clear favorite emerges,
it will probably be absorbed into the white pages, though this would not
preclude personal use of the other style. None of these things requires
white-pages support -- it is just a matter of what we want to encourage
users to use, and how strongly.
5. A good, portable, user-modifiable pretty printer is needed, and if it
were done well enough I see no reason not to put the user-visible
interface in the white pages next time around. Waters' GPRINT is one
candidate, and is being adopted as an interim pretty-printer by DEC.
The last time I looked, the code for that package was impenetrable and
the interface to it was excessively hairy, but I've heard that it has
been simplified. Maybe this is what we want to go with. Other options?
6. We need to work out the business of taxonomic error-handling. Moon
has a proposal in mind, I believe. A possible problem is that this
wants to be white-pages, so if it depends on flavors it gets tied up
with the issue of making flavors white-pages as well.
7. The Hemlock editor, a public-domain Emacs-clone written in portable
Common Lisp, is now running on the Perq and Vax implementations. We
have a lot of additional commands and modes to implement and some tuning
to do, but that should happen fairly rapidly over the next few months.
Of course, this cannot just be moved verbatim to a new implementation
and run there, since it interacts closely with screen-management and
with the file system. Once Hemlock is polished, it will provide a
reasonable minimum editing/top-level environment for any Common Lisp
implementation that takes the trouble to adapt it to the local system.
This should eliminate the need for hairy rubout-handlers, interlispy
top-levels, S-expression editors, and some other "environment" packages.
We plan to add some version of "info mode" at some point and to get the
Common Lisp Manual and yellow pages documents set up for tree-structured
access by this package, but that won't happen right away.
8. Someone ought to put together a reasonable package of macro-writer's
aids: functions that know which things can be evaluated multiple times
without producing side-effects, type-analysis hacks, and other such
goodies.
If you have items to add to this list, let me know.
-- Scott
∂18-Aug-83 1006 @MIT-MC:benson@SCRC-TENEX What to do next
Received: from MIT-MC by SU-AI with TCP/SMTP; 18 Aug 83 10:06:04 PDT
Date: Thursday, 18 August 1983 11:54-EDT
From: dlw at SCRC-TENEX, benson at SCRC-TENEX
Subject: What to do next
To: fahlman at cmuc
Cc: common-lisp at su-ai
Scott, I appreciated your summary of pending issues in Common Lisp, and
I certainly think we should proceed to work on these things. However, I
think that the "next things to do", after we get out the initial real
Common Lisp manual, are:
(1) Create a Common Lisp Virtual Machine specification, and gather a
body of public domain Lisp code which, when loaded into a proto-Lisp
that meets the spec, produces a complete Common Lisp interpreter that
meets the full language spec. (This doesn't address the portable
compiler problem.)
(2) Establish an official Common Lisp subset, suitable for
implementation on 16-bit microcomputers such as the 68000 and the 8088.
I understand that Gabriel is interested in 68000 implementations, and I
am trying to interest Bob Rorscharch (who implemented IQLISP, which is
an IBM PC implementation of a UCILISP subset) in converting his product
into a Common Lisp implementation.
There are a lot of problems with subsetting. You can't leave out
multiple values, because several primitives return multiple values and
you don't want to omit all of these primitives (and you don't want to
discourage the addition of new primitives that return multiple values,
in future versions of Common Lisp). You can't leave out packages, at
least not entirely, because keywords are essential to many functions.
And many things, if removed, would have to be replaced by something
significantly less clean. We'd ideally like to remove things that (a)
can be removed without creating the need for an unclean simpler
substitute, and (b) aren't used by the rest of the system. In other
words, we have to find modular chunks to break off. And, of course,
any program that works in the subset has to work and do exactly the
same thing in full Common Lisp, unless the program has some error
(in the "it is an error" sense). The decision as to what goes
in and what goes out should be made in light of the fact that
an implementation might be heavily into "autoloading".
Complex numbers can easily be omitted.
The requirement for all the floating point precisions can be
omitted. Of course, Common Lisp is flexible in this regard anyway.
Rational numbers could be left out. They aren't hard, per se, but
they're just another thing to do. The "/" function on two integers
would have to signal an error.
Packages could be trimmed down to only be a feature that supplies
keywords; most of the package system might be removable.
Lexical scoping might possibly be removable. You could remove support
for LABELS, FLET, and MACROLET. You can't remove internal functions
entirely (i.e. MAPCAR of a lambda-expression can't be removed) but they
might have some restrictions on them.
Adjustable arrays could be removed. Fill pointers could go too,
although it's not clear that it's worth it. In the extreme, you could
have only simple arrays. You could even remove multi-dimensional arrays
entirely, or keep only 1-D and 2-D arrays.
Several functions look like they might be big, and aren't really
required. Some candidates: COERCE, TYPE-OF, the hard version
of DEFSETF (whatever you call it), and LOOP.
TYPEP and SUBTYPEP are hard to do, but it's hard to see how
to get rid of the typing system! SUBTYPEP itself might go.
Multiple values would be a great thing to get rid of in the subset, but
there are the Common Lisp primitives that use multiple values. Perhaps
we should add new primitives that return these second values only, for
the benefit of the subset, or something.
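One hedged reading of that suggestion, as the full language could define it:
single-value companions for the standard multiple-value primitives, e.g. a
remainder-only FLOOR (the name FLOOR-REMAINDER is made up):

    (defun floor-remainder (number divisor)
      ;; Just the second value of FLOOR, for a subset without multiple values.
      (nth-value 1 (floor number divisor)))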
Catch, throw, and unwind-protect could be removed, although they're
sure hard to live without.
Lots of numeric stuff is non-critical: GCD, LCM, CONJUGATE, the
exponentials and transcendentals, rationalize, byte manipulation, random
numbers.
The sequence functions are a lot of work and take a lot of room in your
machine. It would be nice to do something about this. Unfortunately,
simply omitting all the sequence functions takes away valuable basic
functionality such as MEMQ. Perhaps the subset could leave out some of
the keywords, like :test and :test-not and :from-end.
Hash tables are not strictly necessary, although the system itself
is likely to want to use some kind of hash table somewhere,
maybe not the user-visible ones.
Maybe some of the defstruct options could be omitted, though I don't
think that getting rid of defstruct entirely would be acceptable.
Some of the make-xxx-stream functions are unnecessary.
Some of the hairy reader syntax is not strictly necessary. The circular
structure stuff and load-time evaluation are the main candidates.
The stuff to allow manipulation of readtables is not strictly necessary,
or could be partially restricted.
Some of the hairy format options could be omitted. I won't go into
detail on this.
Some of the hairy OPEN options could go, although I'd hate to be the one
to decide which options are the non-critical ones. Also some of the
file operations (rename, delete, attribute manipulation) could go.
The debugging tools might be optional although probably they just
get autoloaded anyway.
∂23-Mar-84 2248 GS70@CMU-CS-A Common Lisp Reference Manual
Received: from CMU-CS-A.ARPA by SU-AI.ARPA with TCP; 23 Mar 84 22:48:17 PST
Date: 24 Mar 84 0130 EST (Saturday)
From: Guy.Steele@CMU-CS-A
To: common-lisp@SU-AI
Subject: Common Lisp Reference Manual
The publisher of the Common Lisp Reference Manual is Digital Press. I
understand that they may be negotiating with manufacturers to allow
them to reprint the manual in various ways as part of their product
documentation. I am leaving the business and legal aspects of this up
to Digital Press, and referring all corporate inquiries to them. My
goal is primarily to ensure that (a) no one publishes a manual that
claims to be about Common Lisp when it doesn't satisfy the Common Lisp
specifications, and (b) to make sure that everyone involved in the
Common Lisp effort is properly credited. I certainly do not want to
block anyone from implementing or using Common Lisp, or even a subset,
superset, or side-set of Common Lisp, as long as any differences are
clearly and correctly stated in the relevant documentation, and as long
as the Common Lisp Reference Manual is recognized and credited as the
definitive document on Common Lisp. This requires a certain balance
between free permission and tight control. This is why I am letting
the publisher handle it; they almost certainly have more experience
with such things than I do.
I have asked the editor at Digital Press to arrange for complimentary
copies to be sent to everyone who has contributed substantially to the
Common Lisp effort. This will include most of the people on this
mailing list, I imagine. The set of people I have in mind is listed in
the acknowledgements of the book--seventy or eighty persons
altogether--so if you see a copy of the book and find your name in that
list, you might want to wait a bit for your complimentary copy to show
up before buying one. (Because of the large number of copies involved,
they aren't really complimentary, which is to say the publisher isn't
footing the cost: the cost of them will be paid out of the royalties.
I estimate that the royalties from the entire first print run will just
about cover these free copies. It seems only fair to me that everyone
who contributed to the language design should get a copy of the final
version!)
The nominal schedule calls for the typesetter to take about five weeks
to produce camera-ready copy from the files I sent to them on magnetic
tape. The process of printing, binding, and distribution will then take
another four to five weeks. So at this point we're talking availability
at about the end of May. This is a tight and optimistic schedule; don't
blame Digital Press if it slides. (I'm certainly not about to cast any
stones!) Unless you're an implementor wanting to order a thousand
copies to distribute with your system, please don't bother the folks at
Digital Press until then; they've got enough problems. I'll send more
information to this mailing list as the date approaches.
One last note. The book is about 400 pages of 8.5" by 11" Dover output.
Apparently the publisher and typesetter decided that this made the lines
too wide for easy reading, so they will use a 6" by 9" format. This
will make the shape of the book approximately cubical. Now, there are
26 chapters counting the index, and a Rubik's cube has 26 exterior cubies.
I'll let you individually extrapolate and fantasize from there.
--Guy
∂20-Jun-84 2152 GS70@CMU-CS-A.ARPA "ANSI Lisp" rumor
Received: from CMU-CS-A.ARPA by SU-AI.ARPA with TCP; 20 Jun 84 21:52:02 PDT
Date: 21 Jun 84 0050 EDT (Thursday)
From: Guy.Steele@CMU-CS-A.ARPA
To: masinter.pa@XEROX.ARPA
Subject: "ANSI Lisp" rumor
CC: common-lisp@SU-AI.ARPA
In-Reply-To: "masinter.pa@XEROX.ARPA's message of 2 Jun 84 20:55-EST"
I do not know of any official effort within ANSI to do anything at all
about LISP. Here is what I do know: I have been told that a group
in China has suggested that perhaps an ISO standard for LISP should
be promulgated. I know nothing more about it than that. However,
at the request of David Wise and J.A.N. Lee, I have sent a copy of
the Common LISP Manual to J.A.N. Lee, who has been involved with
standards of various kinds at ACM. (David Wise is a member of the SIGPLAN
council, or whatever it is called, and is the LISP expert within that
body.) The idea is that if either an ISO or ANSI standards effort
were to be undertaken, Wise and Lee feel that such an effort should
certainly take the work done on Common LISP into account, and they want
people in the standards organizations to be aware of the Common LISP
work. I sent a copy of the Table of Contents to Lee several months
ago, and it was my understanding that he might choose to circulate
copies of it to, for example, members of the SIGPLAN council.
That's where it stands. I repeat, I know of no effort actually to
start a standards effort; Wise and Lee merely want certain people to
be prepared by already having information about Common LISP if any of
a number of possible developments should unfold.
--Guy